Test Report: Docker_Linux_crio 21625

f5ddb069c61c98d891ee28fed061fe1ee97ea306:2025-10-03:41753
Failed tests (56/166)

Order  Failed test  Duration (s)
27 TestAddons/Setup 520.49
37 TestErrorSpam/setup 496.39
46 TestFunctional/serial/StartWithProxy 499.5
48 TestFunctional/serial/SoftStart 366.08
50 TestFunctional/serial/KubectlGetPods 1.98
60 TestFunctional/serial/MinikubeKubectlCmd 2.06
61 TestFunctional/serial/MinikubeKubectlCmdDirectly 2.03
62 TestFunctional/serial/ExtraConfig 733.9
63 TestFunctional/serial/ComponentHealth 1.85
66 TestFunctional/serial/InvalidService 0.05
69 TestFunctional/parallel/DashboardCmd 1.7
72 TestFunctional/parallel/StatusCmd 3.24
76 TestFunctional/parallel/ServiceCmdConnect 1.49
78 TestFunctional/parallel/PersistentVolumeClaim 241.49
82 TestFunctional/parallel/MySQL 1.32
88 TestFunctional/parallel/NodeLabels 2.3
93 TestFunctional/parallel/ServiceCmd/DeployApp 0.07
94 TestFunctional/parallel/ServiceCmd/List 0.3
97 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
104 TestFunctional/parallel/ServiceCmd/HTTPS 0.29
105 TestFunctional/parallel/ServiceCmd/Format 0.31
106 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.07
107 TestFunctional/parallel/ServiceCmd/URL 0.31
108 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.12
109 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.28
114 TestFunctional/parallel/MountCmd/any-port 2.48
116 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
119 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.23
120 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.3
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0.07
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 94.78
140 TestMultiControlPlane/serial/StartCluster 500.73
141 TestMultiControlPlane/serial/DeployApp 95.37
142 TestMultiControlPlane/serial/PingHostFromPods 1.35
143 TestMultiControlPlane/serial/AddWorkerNode 1.51
144 TestMultiControlPlane/serial/NodeLabels 1.29
145 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.54
146 TestMultiControlPlane/serial/CopyFile 1.52
147 TestMultiControlPlane/serial/StopSecondaryNode 1.58
148 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.58
149 TestMultiControlPlane/serial/RestartSecondaryNode 43.32
150 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.54
151 TestMultiControlPlane/serial/RestartClusterKeepsNodes 369.95
152 TestMultiControlPlane/serial/DeleteSecondaryNode 1.78
153 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.55
154 TestMultiControlPlane/serial/StopCluster 1.39
155 TestMultiControlPlane/serial/RestartCluster 368.3
156 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.56
157 TestMultiControlPlane/serial/AddSecondaryNode 1.48
158 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.55
162 TestJSONOutput/start/Command 496.13
165 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestMinikubeProfile 500.66
220 TestMultiNode/serial/ValidateNameConflict 7200.059
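The headline failure, TestAddons/Setup, can be reproduced by hand with the same start invocation the suite ran (shown in full in the log below). This is a sketch only, assuming a locally built out/minikube-linux-amd64 and a Docker host comparable to the CI agent (Ubuntu 22.04, Docker driver, CRI-O runtime); all flags are copied verbatim from the test log:

	out/minikube-linux-amd64 start -p addons-051972 --wait=true --memory=4096 \
	  --alsologtostderr --driver=docker --container-runtime=crio \
	  --addons=registry --addons=registry-creds --addons=metrics-server \
	  --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth \
	  --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin \
	  --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin \
	  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher

A non-zero exit (the run below ended with exit status 80 after 8m40s) reproduces the failure; "out/minikube-linux-amd64 delete -p addons-051972" cleans up afterwards.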
TestAddons/Setup (520.49s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-051972 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-051972 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (8m40.461882764s)

-- stdout --
	* [addons-051972] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "addons-051972" primary control-plane node in "addons-051972" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	I1003 17:42:41.886555   13541 out.go:360] Setting OutFile to fd 1 ...
	I1003 17:42:41.886790   13541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 17:42:41.886799   13541 out.go:374] Setting ErrFile to fd 2...
	I1003 17:42:41.886803   13541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 17:42:41.887022   13541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 17:42:41.887526   13541 out.go:368] Setting JSON to false
	I1003 17:42:41.888273   13541 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1513,"bootTime":1759511849,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 17:42:41.888350   13541 start.go:140] virtualization: kvm guest
	I1003 17:42:41.890302   13541 out.go:179] * [addons-051972] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 17:42:41.891481   13541 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 17:42:41.891488   13541 notify.go:220] Checking for updates...
	I1003 17:42:41.893831   13541 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:42:41.895072   13541 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 17:42:41.899496   13541 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 17:42:41.900583   13541 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 17:42:41.901666   13541 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:42:41.902924   13541 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 17:42:41.925409   13541 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 17:42:41.925485   13541 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 17:42:41.978011   13541 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-03 17:42:41.969011379 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 17:42:41.978143   13541 docker.go:318] overlay module found
	I1003 17:42:41.979881   13541 out.go:179] * Using the docker driver based on user configuration
	I1003 17:42:41.981014   13541 start.go:304] selected driver: docker
	I1003 17:42:41.981027   13541 start.go:924] validating driver "docker" against <nil>
	I1003 17:42:41.981039   13541 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:42:41.981585   13541 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 17:42:42.033591   13541 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-03 17:42:42.024871319 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 17:42:42.033796   13541 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 17:42:42.034093   13541 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:42:42.035526   13541 out.go:179] * Using Docker driver with root privileges
	I1003 17:42:42.036605   13541 cni.go:84] Creating CNI manager for ""
	I1003 17:42:42.036674   13541 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 17:42:42.036688   13541 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 17:42:42.036749   13541 start.go:348] cluster config:
	{Name:addons-051972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 17:42:42.038032   13541 out.go:179] * Starting "addons-051972" primary control-plane node in "addons-051972" cluster
	I1003 17:42:42.039065   13541 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 17:42:42.040115   13541 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 17:42:42.041070   13541 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 17:42:42.041098   13541 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 17:42:42.041105   13541 cache.go:58] Caching tarball of preloaded images
	I1003 17:42:42.041166   13541 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 17:42:42.041184   13541 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 17:42:42.041194   13541 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 17:42:42.041534   13541 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/config.json ...
	I1003 17:42:42.041556   13541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/config.json: {Name:mk4b7ca62f2d3e0f7b8c96cf9e865d0400b8283c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:42:42.057178   13541 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1003 17:42:42.057297   13541 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1003 17:42:42.057330   13541 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1003 17:42:42.057339   13541 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1003 17:42:42.057347   13541 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1003 17:42:42.057354   13541 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1003 17:42:54.363660   13541 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1003 17:42:54.363693   13541 cache.go:232] Successfully downloaded all kic artifacts
	I1003 17:42:54.363730   13541 start.go:360] acquireMachinesLock for addons-051972: {Name:mk7759cc119d82b635346dbb5be57827b9835ee8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:42:54.363831   13541 start.go:364] duration metric: took 72.173µs to acquireMachinesLock for "addons-051972"
	I1003 17:42:54.363855   13541 start.go:93] Provisioning new machine with config: &{Name:addons-051972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 17:42:54.363918   13541 start.go:125] createHost starting for "" (driver="docker")
	I1003 17:42:54.366753   13541 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1003 17:42:54.366959   13541 start.go:159] libmachine.API.Create for "addons-051972" (driver="docker")
	I1003 17:42:54.367006   13541 client.go:168] LocalClient.Create starting
	I1003 17:42:54.367091   13541 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem
	I1003 17:42:54.646826   13541 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem
	I1003 17:42:54.779928   13541 cli_runner.go:164] Run: docker network inspect addons-051972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 17:42:54.797315   13541 cli_runner.go:211] docker network inspect addons-051972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 17:42:54.797374   13541 network_create.go:284] running [docker network inspect addons-051972] to gather additional debugging logs...
	I1003 17:42:54.797391   13541 cli_runner.go:164] Run: docker network inspect addons-051972
	W1003 17:42:54.812901   13541 cli_runner.go:211] docker network inspect addons-051972 returned with exit code 1
	I1003 17:42:54.812931   13541 network_create.go:287] error running [docker network inspect addons-051972]: docker network inspect addons-051972: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-051972 not found
	I1003 17:42:54.812942   13541 network_create.go:289] output of [docker network inspect addons-051972]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-051972 not found
	
	** /stderr **
	I1003 17:42:54.813105   13541 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 17:42:54.829032   13541 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f9f260}
	I1003 17:42:54.829075   13541 network_create.go:124] attempt to create docker network addons-051972 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1003 17:42:54.829114   13541 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-051972 addons-051972
	I1003 17:42:54.882993   13541 network_create.go:108] docker network addons-051972 192.168.49.0/24 created
	I1003 17:42:54.883022   13541 kic.go:121] calculated static IP "192.168.49.2" for the "addons-051972" container
	I1003 17:42:54.883082   13541 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 17:42:54.898512   13541 cli_runner.go:164] Run: docker volume create addons-051972 --label name.minikube.sigs.k8s.io=addons-051972 --label created_by.minikube.sigs.k8s.io=true
	I1003 17:42:54.915059   13541 oci.go:103] Successfully created a docker volume addons-051972
	I1003 17:42:54.915145   13541 cli_runner.go:164] Run: docker run --rm --name addons-051972-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-051972 --entrypoint /usr/bin/test -v addons-051972:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 17:43:01.585061   13541 cli_runner.go:217] Completed: docker run --rm --name addons-051972-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-051972 --entrypoint /usr/bin/test -v addons-051972:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (6.669876274s)
	I1003 17:43:01.585105   13541 oci.go:107] Successfully prepared a docker volume addons-051972
	I1003 17:43:01.585135   13541 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 17:43:01.585164   13541 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 17:43:01.585237   13541 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-051972:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 17:43:05.985989   13541 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-051972:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.40065927s)
	I1003 17:43:05.986033   13541 kic.go:203] duration metric: took 4.400867619s to extract preloaded images to volume ...
	W1003 17:43:05.986173   13541 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1003 17:43:05.986208   13541 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1003 17:43:05.986255   13541 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 17:43:06.037069   13541 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-051972 --name addons-051972 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-051972 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-051972 --network addons-051972 --ip 192.168.49.2 --volume addons-051972:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1003 17:43:06.316183   13541 cli_runner.go:164] Run: docker container inspect addons-051972 --format={{.State.Running}}
	I1003 17:43:06.333777   13541 cli_runner.go:164] Run: docker container inspect addons-051972 --format={{.State.Status}}
	I1003 17:43:06.352666   13541 cli_runner.go:164] Run: docker exec addons-051972 stat /var/lib/dpkg/alternatives/iptables
	I1003 17:43:06.401843   13541 oci.go:144] the created container "addons-051972" has a running status.
	I1003 17:43:06.401871   13541 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/addons-051972/id_rsa...
	I1003 17:43:06.504854   13541 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-8669/.minikube/machines/addons-051972/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 17:43:06.529652   13541 cli_runner.go:164] Run: docker container inspect addons-051972 --format={{.State.Status}}
	I1003 17:43:06.550969   13541 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 17:43:06.551043   13541 kic_runner.go:114] Args: [docker exec --privileged addons-051972 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 17:43:06.602302   13541 cli_runner.go:164] Run: docker container inspect addons-051972 --format={{.State.Status}}
	I1003 17:43:06.624706   13541 machine.go:93] provisionDockerMachine start ...
	I1003 17:43:06.624820   13541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051972
	I1003 17:43:06.649100   13541 main.go:141] libmachine: Using SSH client type: native
	I1003 17:43:06.649390   13541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1003 17:43:06.649410   13541 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 17:43:06.650256   13541 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38276->127.0.0.1:32768: read: connection reset by peer
	I1003 17:43:09.792918   13541 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-051972
	
	I1003 17:43:09.792945   13541 ubuntu.go:182] provisioning hostname "addons-051972"
	I1003 17:43:09.793026   13541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051972
	I1003 17:43:09.810534   13541 main.go:141] libmachine: Using SSH client type: native
	I1003 17:43:09.810740   13541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1003 17:43:09.810753   13541 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-051972 && echo "addons-051972" | sudo tee /etc/hostname
	I1003 17:43:09.961208   13541 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-051972
	
	I1003 17:43:09.961278   13541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051972
	I1003 17:43:09.978701   13541 main.go:141] libmachine: Using SSH client type: native
	I1003 17:43:09.978940   13541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1003 17:43:09.978964   13541 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-051972' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-051972/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-051972' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 17:43:10.121238   13541 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 17:43:10.121266   13541 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 17:43:10.121309   13541 ubuntu.go:190] setting up certificates
	I1003 17:43:10.121326   13541 provision.go:84] configureAuth start
	I1003 17:43:10.121376   13541 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-051972
	I1003 17:43:10.138773   13541 provision.go:143] copyHostCerts
	I1003 17:43:10.138854   13541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 17:43:10.138992   13541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 17:43:10.139085   13541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 17:43:10.139160   13541 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.addons-051972 san=[127.0.0.1 192.168.49.2 addons-051972 localhost minikube]
	I1003 17:43:10.272687   13541 provision.go:177] copyRemoteCerts
	I1003 17:43:10.272764   13541 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 17:43:10.272813   13541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051972
	I1003 17:43:10.290194   13541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/addons-051972/id_rsa Username:docker}
	I1003 17:43:10.390064   13541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 17:43:10.407675   13541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1003 17:43:10.423944   13541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 17:43:10.439942   13541 provision.go:87] duration metric: took 318.598913ms to configureAuth
	I1003 17:43:10.439969   13541 ubuntu.go:206] setting minikube options for container-runtime
	I1003 17:43:10.440183   13541 config.go:182] Loaded profile config "addons-051972": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 17:43:10.440293   13541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051972
	I1003 17:43:10.458463   13541 main.go:141] libmachine: Using SSH client type: native
	I1003 17:43:10.458687   13541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1003 17:43:10.458708   13541 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 17:43:10.703723   13541 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 17:43:10.703749   13541 machine.go:96] duration metric: took 4.079010803s to provisionDockerMachine
	I1003 17:43:10.703760   13541 client.go:171] duration metric: took 16.336745076s to LocalClient.Create
	I1003 17:43:10.703780   13541 start.go:167] duration metric: took 16.336820134s to libmachine.API.Create "addons-051972"
	I1003 17:43:10.703794   13541 start.go:293] postStartSetup for "addons-051972" (driver="docker")
	I1003 17:43:10.703808   13541 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 17:43:10.703881   13541 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 17:43:10.703969   13541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051972
	I1003 17:43:10.721098   13541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/addons-051972/id_rsa Username:docker}
	I1003 17:43:10.822186   13541 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 17:43:10.825476   13541 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 17:43:10.825499   13541 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 17:43:10.825509   13541 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 17:43:10.825571   13541 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 17:43:10.825598   13541 start.go:296] duration metric: took 121.797459ms for postStartSetup
	I1003 17:43:10.825947   13541 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-051972
	I1003 17:43:10.843121   13541 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/config.json ...
	I1003 17:43:10.843424   13541 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 17:43:10.843473   13541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051972
	I1003 17:43:10.859727   13541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/addons-051972/id_rsa Username:docker}
	I1003 17:43:10.956720   13541 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 17:43:10.961112   13541 start.go:128] duration metric: took 16.5971805s to createHost
	I1003 17:43:10.961134   13541 start.go:83] releasing machines lock for "addons-051972", held for 16.597290781s
	I1003 17:43:10.961197   13541 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-051972
	I1003 17:43:10.978227   13541 ssh_runner.go:195] Run: cat /version.json
	I1003 17:43:10.978288   13541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051972
	I1003 17:43:10.978313   13541 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 17:43:10.978374   13541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051972
	I1003 17:43:10.996073   13541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/addons-051972/id_rsa Username:docker}
	I1003 17:43:10.997098   13541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/addons-051972/id_rsa Username:docker}
	I1003 17:43:11.144895   13541 ssh_runner.go:195] Run: systemctl --version
	I1003 17:43:11.151106   13541 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 17:43:11.183317   13541 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 17:43:11.187691   13541 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 17:43:11.187773   13541 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 17:43:11.212181   13541 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 17:43:11.212202   13541 start.go:495] detecting cgroup driver to use...
	I1003 17:43:11.212235   13541 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 17:43:11.212280   13541 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 17:43:11.226874   13541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 17:43:11.238247   13541 docker.go:218] disabling cri-docker service (if available) ...
	I1003 17:43:11.238301   13541 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 17:43:11.253542   13541 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 17:43:11.269811   13541 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 17:43:11.346020   13541 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 17:43:11.424924   13541 docker.go:234] disabling docker service ...
	I1003 17:43:11.424997   13541 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 17:43:11.442299   13541 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 17:43:11.454293   13541 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 17:43:11.533456   13541 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 17:43:11.609245   13541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 17:43:11.621301   13541 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 17:43:11.634795   13541 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 17:43:11.634855   13541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 17:43:11.644360   13541 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 17:43:11.644417   13541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 17:43:11.652450   13541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 17:43:11.660716   13541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 17:43:11.668943   13541 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 17:43:11.676742   13541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 17:43:11.684774   13541 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 17:43:11.697169   13541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 17:43:11.705101   13541 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 17:43:11.711825   13541 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 17:43:11.711866   13541 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 17:43:11.723107   13541 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 17:43:11.730083   13541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:43:11.805489   13541 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 17:43:11.903873   13541 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 17:43:11.903961   13541 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 17:43:11.907876   13541 start.go:563] Will wait 60s for crictl version
	I1003 17:43:11.907937   13541 ssh_runner.go:195] Run: which crictl
	I1003 17:43:11.911172   13541 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 17:43:11.934148   13541 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 17:43:11.934270   13541 ssh_runner.go:195] Run: crio --version
	I1003 17:43:11.960810   13541 ssh_runner.go:195] Run: crio --version
	I1003 17:43:11.988426   13541 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 17:43:11.989615   13541 cli_runner.go:164] Run: docker network inspect addons-051972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 17:43:12.006482   13541 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 17:43:12.010394   13541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 17:43:12.019950   13541 kubeadm.go:883] updating cluster {Name:addons-051972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 17:43:12.020084   13541 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 17:43:12.020137   13541 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 17:43:12.050789   13541 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 17:43:12.050807   13541 crio.go:433] Images already preloaded, skipping extraction
	I1003 17:43:12.050852   13541 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 17:43:12.074605   13541 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 17:43:12.074626   13541 cache_images.go:85] Images are preloaded, skipping loading
	I1003 17:43:12.074635   13541 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 17:43:12.074728   13541 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-051972 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-051972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 17:43:12.074806   13541 ssh_runner.go:195] Run: crio config
	I1003 17:43:12.117685   13541 cni.go:84] Creating CNI manager for ""
	I1003 17:43:12.117709   13541 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 17:43:12.117726   13541 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 17:43:12.117746   13541 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-051972 NodeName:addons-051972 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 17:43:12.117862   13541 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-051972"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 17:43:12.117917   13541 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 17:43:12.126283   13541 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 17:43:12.126349   13541 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 17:43:12.133409   13541 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1003 17:43:12.144999   13541 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 17:43:12.159385   13541 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1003 17:43:12.171142   13541 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 17:43:12.174552   13541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 17:43:12.184064   13541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:43:12.256238   13541 ssh_runner.go:195] Run: sudo systemctl start kubelet
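	The 10-kubeadm.conf drop-in and kubelet.service unit are copied from memory (363 and 352 bytes), so their contents never appear in this log; the daemon-reload is what makes systemd pick them up before the kubelet is started. An illustrative sketch of the pattern (the ExecStart shown is hypothetical, not minikube's actual drop-in):

	    # Hypothetical drop-in: an empty ExecStart= clears the base unit's
	    # command so the next line can replace it.
	    sudo mkdir -p /etc/systemd/system/kubelet.service.d
	    printf '%s\n' '[Service]' 'ExecStart=' \
	      'ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf' \
	      | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
	    sudo systemctl daemon-reload   # required after adding or editing drop-ins
	    sudo systemctl start kubelet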
	I1003 17:43:12.277005   13541 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972 for IP: 192.168.49.2
	I1003 17:43:12.277030   13541 certs.go:195] generating shared ca certs ...
	I1003 17:43:12.277051   13541 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:12.277184   13541 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 17:43:12.635669   13541 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt ...
	I1003 17:43:12.635709   13541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt: {Name:mkfd022f4a5e3814e393ac22f6aadc926b2d3c8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:12.635895   13541 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key ...
	I1003 17:43:12.635907   13541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key: {Name:mk0059723aff3beca527d30edefddc54f2b9462b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:12.636003   13541 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 17:43:13.012584   13541 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt ...
	I1003 17:43:13.012616   13541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt: {Name:mkb82f0ccad4e0e685d4125c1f77b5f593917381 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:13.012778   13541 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key ...
	I1003 17:43:13.012789   13541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key: {Name:mk01b4e8c8ea61f6dea34e1b6414185e71f40731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:13.012859   13541 certs.go:257] generating profile certs ...
	I1003 17:43:13.012910   13541 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/client.key
	I1003 17:43:13.012924   13541 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/client.crt with IP's: []
	I1003 17:43:13.069174   13541 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/client.crt ...
	I1003 17:43:13.069209   13541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/client.crt: {Name:mk442594e42e93c0652e1f6393bd01b6b7d88159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:13.069404   13541 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/client.key ...
	I1003 17:43:13.069414   13541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/client.key: {Name:mk933abfa838c9014b6baf84547be232dce15526 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:13.069511   13541 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/apiserver.key.8badd07d
	I1003 17:43:13.069533   13541 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/apiserver.crt.8badd07d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1003 17:43:13.387198   13541 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/apiserver.crt.8badd07d ...
	I1003 17:43:13.387229   13541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/apiserver.crt.8badd07d: {Name:mkcf60dc05fb1a1594548a09c7761960ffebe163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:13.387394   13541 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/apiserver.key.8badd07d ...
	I1003 17:43:13.387407   13541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/apiserver.key.8badd07d: {Name:mkf8f6070d136d7076c7ccedd118397d0b1686c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:13.387478   13541 certs.go:382] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/apiserver.crt.8badd07d -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/apiserver.crt
	I1003 17:43:13.387550   13541 certs.go:386] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/apiserver.key.8badd07d -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/apiserver.key
	I1003 17:43:13.387600   13541 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/proxy-client.key
	I1003 17:43:13.387624   13541 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/proxy-client.crt with IP's: []
	I1003 17:43:13.839806   13541 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/proxy-client.crt ...
	I1003 17:43:13.839833   13541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/proxy-client.crt: {Name:mkfa7469ec9465f4cfe9b7430f85679a1c6bbc6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:13.840000   13541 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/proxy-client.key ...
	I1003 17:43:13.840016   13541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/proxy-client.key: {Name:mk0500c4b03abdef74fb3b420d81e04ab94f0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:13.840205   13541 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 17:43:13.840244   13541 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 17:43:13.840264   13541 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 17:43:13.840291   13541 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 17:43:13.840885   13541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 17:43:13.857820   13541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 17:43:13.873839   13541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 17:43:13.889690   13541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 17:43:13.905303   13541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1003 17:43:13.921237   13541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1003 17:43:13.937082   13541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 17:43:13.952780   13541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/addons-051972/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 17:43:13.968578   13541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 17:43:13.986106   13541 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 17:43:13.997423   13541 ssh_runner.go:195] Run: openssl version
	I1003 17:43:14.003163   13541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 17:43:14.013533   13541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 17:43:14.016999   13541 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 17:43:14.017053   13541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 17:43:14.050096   13541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
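	The ln to /etc/ssl/certs/b5213941.0 is OpenSSL's subject-hash convention: a CApath directory is searched for files named <subject-hash>.0, and the hash is exactly what `openssl x509 -hash` printed on the previous line. A short sketch of the derivation (the final verify is an added check, not a step the log runs):

	    # Derive the hash-named symlink OpenSSL expects in a CApath directory.
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    echo "$HASH"    # b5213941 for this CA, per the symlink created above
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	    # Added check: a cert signed by minikubeCA should now verify via CApath.
	    openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt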
	I1003 17:43:14.058470   13541 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 17:43:14.061856   13541 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 17:43:14.061915   13541 kubeadm.go:400] StartCluster: {Name:addons-051972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 17:43:14.061994   13541 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 17:43:14.062038   13541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 17:43:14.087300   13541 cri.go:89] found id: ""
	I1003 17:43:14.087361   13541 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 17:43:14.094844   13541 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 17:43:14.102173   13541 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 17:43:14.102225   13541 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 17:43:14.109521   13541 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 17:43:14.109537   13541 kubeadm.go:157] found existing configuration files:
	
	I1003 17:43:14.109576   13541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 17:43:14.116689   13541 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 17:43:14.116742   13541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 17:43:14.123479   13541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 17:43:14.130288   13541 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 17:43:14.130327   13541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 17:43:14.136835   13541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 17:43:14.143748   13541 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 17:43:14.143791   13541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 17:43:14.150457   13541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 17:43:14.157934   13541 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 17:43:14.158067   13541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
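	The four grep-then-rm pairs above (admin.conf, kubelet.conf, controller-manager.conf, scheduler.conf) are the stale-config cleanup: a kubeconfig survives only if it already points at the expected control-plane endpoint. The same logic condensed into one loop (a sketch, not minikube's actual code):

	    # Remove any kubeconfig that does not reference the expected endpoint,
	    # so the following kubeadm init regenerates it from scratch.
	    ENDPOINT='https://control-plane.minikube.internal:8443'
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" 2>/dev/null \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done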
	I1003 17:43:14.164665   13541 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 17:43:14.219164   13541 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 17:43:14.273313   13541 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 17:47:19.655287   13541 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1003 17:47:19.655474   13541 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 17:47:19.657947   13541 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 17:47:19.658041   13541 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 17:47:19.658173   13541 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 17:47:19.658266   13541 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 17:47:19.658321   13541 kubeadm.go:318] OS: Linux
	I1003 17:47:19.658391   13541 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 17:47:19.658463   13541 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 17:47:19.658550   13541 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 17:47:19.658612   13541 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 17:47:19.658695   13541 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 17:47:19.658783   13541 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 17:47:19.658857   13541 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 17:47:19.658930   13541 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 17:47:19.659051   13541 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 17:47:19.659188   13541 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 17:47:19.659290   13541 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 17:47:19.659404   13541 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 17:47:19.661858   13541 out.go:252]   - Generating certificates and keys ...
	I1003 17:47:19.661947   13541 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 17:47:19.662040   13541 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 17:47:19.662147   13541 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 17:47:19.662236   13541 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 17:47:19.662328   13541 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 17:47:19.662373   13541 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 17:47:19.662418   13541 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 17:47:19.662537   13541 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-051972 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 17:47:19.662642   13541 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 17:47:19.662843   13541 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-051972 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 17:47:19.662927   13541 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 17:47:19.663023   13541 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 17:47:19.663108   13541 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 17:47:19.663181   13541 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 17:47:19.663272   13541 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 17:47:19.663321   13541 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 17:47:19.663368   13541 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 17:47:19.663460   13541 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 17:47:19.663515   13541 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 17:47:19.663581   13541 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 17:47:19.663639   13541 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 17:47:19.664853   13541 out.go:252]   - Booting up control plane ...
	I1003 17:47:19.664929   13541 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 17:47:19.665036   13541 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 17:47:19.665120   13541 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 17:47:19.665209   13541 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 17:47:19.665292   13541 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 17:47:19.665384   13541 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 17:47:19.665455   13541 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 17:47:19.665499   13541 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 17:47:19.665603   13541 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 17:47:19.665715   13541 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 17:47:19.665763   13541 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.00160732s
	I1003 17:47:19.665861   13541 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 17:47:19.665989   13541 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 17:47:19.666124   13541 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 17:47:19.666223   13541 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 17:47:19.666294   13541 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000229141s
	I1003 17:47:19.666377   13541 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000384848s
	I1003 17:47:19.666457   13541 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000321765s
	I1003 17:47:19.666463   13541 kubeadm.go:318] 
	I1003 17:47:19.666536   13541 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 17:47:19.666608   13541 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 17:47:19.666690   13541 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 17:47:19.666836   13541 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 17:47:19.666921   13541 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 17:47:19.667001   13541 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 17:47:19.667058   13541 kubeadm.go:318] 
	W1003 17:47:19.667144   13541 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [addons-051972 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [addons-051972 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.00160732s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000229141s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000384848s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000321765s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
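	The signature of this failure: the kubelet reports healthy after about a second, yet all three control-plane endpoints stay dead for the full 4m0s window, which points at the static pods never serving under CRI-O. A triage sketch that expands the crictl hint in the message above (whether any container IDs turn up is run-dependent):

	    # Did CRI-O ever create the control-plane containers?
	    CRI=unix:///var/run/crio/crio.sock
	    sudo crictl --runtime-endpoint "$CRI" ps -a | grep kube | grep -v pause
	    # If a kube-apiserver attempt exists, read its logs:
	    ID=$(sudo crictl --runtime-endpoint "$CRI" ps -a --name kube-apiserver -q | head -n1)
	    [ -n "$ID" ] && sudo crictl --runtime-endpoint "$CRI" logs "$ID"
	    # If no containers exist at all, look at the runtime and kubelet side:
	    sudo journalctl -u crio -n 100 --no-pager
	    sudo journalctl -u kubelet -n 100 --no-pager | grep -iE 'fail|error'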
	
	I1003 17:47:19.667220   13541 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 17:47:20.111381   13541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 17:47:20.124208   13541 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 17:47:20.124255   13541 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 17:47:20.132231   13541 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 17:47:20.132250   13541 kubeadm.go:157] found existing configuration files:
	
	I1003 17:47:20.132289   13541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 17:47:20.139470   13541 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 17:47:20.139522   13541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 17:47:20.146371   13541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 17:47:20.153263   13541 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 17:47:20.153305   13541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 17:47:20.160040   13541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 17:47:20.167023   13541 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 17:47:20.167076   13541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 17:47:20.173726   13541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 17:47:20.180744   13541 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 17:47:20.180807   13541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 17:47:20.187468   13541 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 17:47:20.221567   13541 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 17:47:20.221632   13541 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 17:47:20.240051   13541 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 17:47:20.240149   13541 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 17:47:20.240200   13541 kubeadm.go:318] OS: Linux
	I1003 17:47:20.240267   13541 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 17:47:20.240334   13541 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 17:47:20.240390   13541 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 17:47:20.240434   13541 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 17:47:20.240474   13541 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 17:47:20.240513   13541 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 17:47:20.240559   13541 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 17:47:20.240613   13541 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 17:47:20.296449   13541 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 17:47:20.296680   13541 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 17:47:20.296824   13541 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 17:47:20.303924   13541 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 17:47:20.310619   13541 out.go:252]   - Generating certificates and keys ...
	I1003 17:47:20.310720   13541 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 17:47:20.310799   13541 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 17:47:20.310897   13541 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 17:47:20.311016   13541 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 17:47:20.311088   13541 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 17:47:20.311135   13541 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 17:47:20.311186   13541 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 17:47:20.311234   13541 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 17:47:20.311326   13541 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 17:47:20.311430   13541 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 17:47:20.311487   13541 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 17:47:20.311542   13541 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 17:47:20.532958   13541 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 17:47:20.567673   13541 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 17:47:20.747514   13541 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 17:47:21.164637   13541 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 17:47:21.260906   13541 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 17:47:21.261397   13541 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 17:47:21.263616   13541 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 17:47:21.265396   13541 out.go:252]   - Booting up control plane ...
	I1003 17:47:21.265491   13541 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 17:47:21.265587   13541 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 17:47:21.267128   13541 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 17:47:21.280048   13541 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 17:47:21.280165   13541 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 17:47:21.286154   13541 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 17:47:21.286390   13541 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 17:47:21.286464   13541 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 17:47:21.382212   13541 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 17:47:21.382339   13541 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 17:47:21.883816   13541 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.710517ms
	I1003 17:47:21.886648   13541 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 17:47:21.886781   13541 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 17:47:21.886898   13541 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 17:47:21.887033   13541 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 17:51:21.887579   13541 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000180572s
	I1003 17:51:21.887867   13541 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000379525s
	I1003 17:51:21.888108   13541 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000439898s
	I1003 17:51:21.888148   13541 kubeadm.go:318] 
	I1003 17:51:21.888310   13541 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 17:51:21.888503   13541 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 17:51:21.888721   13541 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 17:51:21.888939   13541 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 17:51:21.889191   13541 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 17:51:21.889361   13541 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 17:51:21.889384   13541 kubeadm.go:318] 
	I1003 17:51:21.891018   13541 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 17:51:21.891144   13541 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 17:51:21.891641   13541 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1003 17:51:21.891700   13541 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
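	The retry fails with a different signature: every check now gets connection refused on 8443, 10257 and 10259 rather than a rate-limiter deadline, meaning nothing is listening at all. That is consistent with the crictl sweeps below finding zero containers. A sketch of the next checks, assuming shell access to the node (for example via minikube ssh):

	    # Were the static pod manifests written, and is anything bound to the ports?
	    ls -l /etc/kubernetes/manifests/
	    sudo ss -ltnp | grep -E ':(8443|10257|10259)[[:space:]]' || echo 'no listeners'
	    # Manifests present but no containers: the kubelet-to-CRI-O path failed.
	    sudo journalctl -u crio --no-pager -n 200 | grep -iE 'error|fail' | tail -n 20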
	I1003 17:51:21.891764   13541 kubeadm.go:402] duration metric: took 8m7.829854306s to StartCluster
	I1003 17:51:21.891806   13541 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 17:51:21.891854   13541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 17:51:21.916939   13541 cri.go:89] found id: ""
	I1003 17:51:21.916986   13541 logs.go:282] 0 containers: []
	W1003 17:51:21.916999   13541 logs.go:284] No container was found matching "kube-apiserver"
	I1003 17:51:21.917009   13541 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 17:51:21.917062   13541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 17:51:21.943130   13541 cri.go:89] found id: ""
	I1003 17:51:21.943150   13541 logs.go:282] 0 containers: []
	W1003 17:51:21.943158   13541 logs.go:284] No container was found matching "etcd"
	I1003 17:51:21.943163   13541 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 17:51:21.943205   13541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 17:51:21.967953   13541 cri.go:89] found id: ""
	I1003 17:51:21.967994   13541 logs.go:282] 0 containers: []
	W1003 17:51:21.968008   13541 logs.go:284] No container was found matching "coredns"
	I1003 17:51:21.968018   13541 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 17:51:21.968073   13541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 17:51:21.992391   13541 cri.go:89] found id: ""
	I1003 17:51:21.992415   13541 logs.go:282] 0 containers: []
	W1003 17:51:21.992423   13541 logs.go:284] No container was found matching "kube-scheduler"
	I1003 17:51:21.992431   13541 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 17:51:21.992490   13541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 17:51:22.018530   13541 cri.go:89] found id: ""
	I1003 17:51:22.018554   13541 logs.go:282] 0 containers: []
	W1003 17:51:22.018562   13541 logs.go:284] No container was found matching "kube-proxy"
	I1003 17:51:22.018568   13541 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 17:51:22.018633   13541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 17:51:22.042074   13541 cri.go:89] found id: ""
	I1003 17:51:22.042103   13541 logs.go:282] 0 containers: []
	W1003 17:51:22.042111   13541 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 17:51:22.042120   13541 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 17:51:22.042171   13541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 17:51:22.065790   13541 cri.go:89] found id: ""
	I1003 17:51:22.065817   13541 logs.go:282] 0 containers: []
	W1003 17:51:22.065828   13541 logs.go:284] No container was found matching "kindnet"
	I1003 17:51:22.065839   13541 logs.go:123] Gathering logs for kubelet ...
	I1003 17:51:22.065853   13541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 17:51:22.131505   13541 logs.go:123] Gathering logs for dmesg ...
	I1003 17:51:22.131538   13541 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 17:51:22.142269   13541 logs.go:123] Gathering logs for describe nodes ...
	I1003 17:51:22.142295   13541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 17:51:22.196419   13541 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 17:51:22.189886    2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 17:51:22.190404    2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 17:51:22.191911    2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 17:51:22.192349    2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 17:51:22.193787    2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 17:51:22.196446   13541 logs.go:123] Gathering logs for CRI-O ...
	I1003 17:51:22.196458   13541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 17:51:22.255262   13541 logs.go:123] Gathering logs for container status ...
	I1003 17:51:22.255300   13541 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1003 17:51:22.282353   13541 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.710517ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000180572s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000379525s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000439898s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1003 17:51:22.282408   13541 out.go:285] * 
	* 
	W1003 17:51:22.282466   13541 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.710517ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000180572s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000379525s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000439898s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.710517ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000180572s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000379525s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000439898s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 17:51:22.282480   13541 out.go:285] * 
	* 
	W1003 17:51:22.284273   13541 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:51:22.287851   13541 out.go:203] 
	W1003 17:51:22.288950   13541 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.710517ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000180572s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000379525s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000439898s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.710517ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000180572s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000379525s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000439898s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 17:51:22.288972   13541 out.go:285] * 
	* 
	I1003 17:51:22.290346   13541 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:110: out/minikube-linux-amd64 start -p addons-051972 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (520.49s)
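All three control-plane components fail their health checks with connection refused, which matches the kubeadm hint in the log above: the kube-apiserver, kube-controller-manager, and kube-scheduler containers likely crashed or never started under CRI-O. A minimal triage sketch following that hint, run on the node via `minikube ssh -p addons-051972` (CONTAINERID is a placeholder, as in the log):

	# list kube-* containers, including exited ones
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect the logs of whichever container exited
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# kubelet and CRI-O unit logs usually record why static pods were killed
	sudo journalctl -u kubelet -n 200 --no-pager
	sudo journalctl -u crio -n 200 --no-pager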

                                                
                                    
TestErrorSpam/setup (496.39s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-093146 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-093146 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p nospam-093146 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-093146 --driver=docker  --container-runtime=crio: exit status 80 (8m16.375169479s)

                                                
                                                
-- stdout --
	* [nospam-093146] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "nospam-093146" primary control-plane node in "nospam-093146" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost nospam-093146] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-093146] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.904148ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000744768s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000970178s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001159826s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.579591ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000051955s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000099686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000069868s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.579591ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000051955s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000099686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000069868s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

                                                
                                                
** /stderr **
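The stderr captured above reproduces the same wait-control-plane timeout as TestAddons/Setup, so the assertions below flag every line of it as unexpected spam. The two preflight warnings are incidental; a quick way to confirm that, assuming the Ubuntu GCP kernel named in the log:

	# the SystemVerification warning only means the kernel config is not loadable as a module;
	# on Ubuntu images the config ships as a file instead
	ls /boot/config-$(uname -r) && grep CONFIG_CGROUPS /boot/config-$(uname -r)
	# the Service-Kubelet warning is silenced exactly as kubeadm suggests
	sudo systemctl enable kubelet.service

Neither warning accounts for the connection-refused failures on ports 8443, 10257, and 10259.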
error_spam_test.go:83: "out/minikube-linux-amd64 start -p nospam-093146 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-093146 --driver=docker  --container-runtime=crio" failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"apiserver-kubelet-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"front-proxy-ca\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"front-proxy-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/ca\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/server\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] etcd/server serving cert is signed for DNS names [localhost nospam-093146] and IPs [192.168.49.2 127.0.0.1 ::1]"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/peer\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-093146] and IPs [192.168.49.2 127.0.0.1 ::1]"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/healthcheck-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"apiserver-etcd-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"sa\" key and public key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 501.904148ms"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.000744768s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.000970178s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.001159826s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "X Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-kubelet-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/server certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/peer certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/healthcheck-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-etcd-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using the existing \"sa\" key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 501.579591ms"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.000051955s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.000099686s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.000069868s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-kubelet-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/server certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/peer certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/healthcheck-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-etcd-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using the existing \"sa\" key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 501.579591ms"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.000051955s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.000099686s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.000069868s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:110: minikube stdout:
* [nospam-093146] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
- MINIKUBE_LOCATION=21625
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "nospam-093146" primary control-plane node in "nospam-093146" cluster
* Pulling base image v0.0.48-1759382731-21643 ...
* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
error_spam_test.go:111: minikube stderr:
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost nospam-093146] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-093146] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.904148ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is not healthy after 4m0.000744768s
[control-plane-check] kube-apiserver is not healthy after 4m0.000970178s
[control-plane-check] kube-scheduler is not healthy after 4m0.001159826s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
* 
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.579591ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.000051955s
[control-plane-check] kube-scheduler is not healthy after 4m0.000099686s
[control-plane-check] kube-controller-manager is not healthy after 4m0.000069868s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.579591ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.000051955s
[control-plane-check] kube-scheduler is not healthy after 4m0.000099686s
[control-plane-check] kube-controller-manager is not healthy after 4m0.000069868s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
* 
--- FAIL: TestErrorSpam/setup (496.39s)
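The crictl triage that the kubeadm output above recommends can be run against the node while it is still up. A minimal sketch, assuming the "nospam-093146" node container from this run still exists and crictl is invoked inside it via "minikube ssh"; CONTAINERID is a placeholder to be filled in from the ps output (the greps filter the ssh output on the host side):

  # List the Kubernetes containers CRI-O knows about, excluding pause sandboxes
  minikube ssh -p nospam-093146 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
  # Inspect the logs of whichever control-plane container exited
  minikube ssh -p nospam-093146 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID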
x
+
TestFunctional/serial/StartWithProxy (499.5s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-889240 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-889240 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: exit status 80 (8m18.234579127s)
-- stdout --
	* [functional-889240] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-889240" primary control-plane node in "functional-889240" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Found network options:
	  - HTTP_PROXY=localhost:43153
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:43153 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-889240 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-889240 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001898169s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000013558s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000028101s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000447455s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.923476ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000040257s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000160452s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000156433s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.923476ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000040257s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000160452s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000156433s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

                                                
                                                
** /stderr **
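
All three control-plane components refused connections on their health ports until kubeadm's four-minute deadline expired, which is why the advice in the output above points at crictl: under CRI-O the static pods most likely never came up. Below is a minimal Go sketch, not minikube or kubeadm code, of the same kind of probe the [control-plane-check] phase performs; the endpoints and the 10-second timeout are the ones quoted in the error, and it has to run on the node itself:

// controlplane_probe.go - a minimal sketch (not minikube code) of the health
// probes kubeadm's [control-plane-check] phase runs, using the endpoints and
// timeout quoted in the error above. Run it on the affected node.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The components serve HTTPS with self-signed certificates, so skip
	// verification for this one-off diagnostic probe only.
	client := &http.Client{
		Timeout:   10 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	endpoints := map[string]string{
		"kube-controller-manager": "https://127.0.0.1:10257/healthz",
		"kube-scheduler":          "https://127.0.0.1:10259/livez",
		"kube-apiserver":          "https://192.168.49.2:8441/livez", // --apiserver-port=8441
	}
	for name, url := range endpoints {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("%-24s %v\n", name, err) // e.g. "connect: connection refused"
			continue
		}
		resp.Body.Close()
		fmt.Printf("%-24s %s\n", name, resp.Status)
	}
}

If every probe still reports connection refused, inspecting the exited containers with the crictl commands quoted above is the natural next step.
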
functional_test.go:2241: failed minikube start. args "out/minikube-linux-amd64 start -p functional-889240 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-889240
helpers_test.go:243: (dbg) docker inspect functional-889240:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	        "Created": "2025-10-03T17:59:56.619817507Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 26766,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T17:59:56.652603806Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hosts",
	        "LogPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a-json.log",
	        "Name": "/functional-889240",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-889240:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-889240",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	                "LowerDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-889240",
	                "Source": "/var/lib/docker/volumes/functional-889240/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-889240",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-889240",
	                "name.minikube.sigs.k8s.io": "functional-889240",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da15d31dc23bdd4694ae9e3b61015d7ce0d61668c73d3e386422834c6f0321d8",
	            "SandboxKey": "/var/run/docker/netns/da15d31dc23b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-889240": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:9e:1d:e9:d9:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "03281bed183d0817c0bc237b5c25093fc10222138aedde4c7deef5823759fa24",
	                    "EndpointID": "28fa584fdd6e253816ae08a2460ef02b91085c8a7996d55008876e3bd65bbc7e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-889240",
	                        "9f4f0f10b4a9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
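
The NetworkSettings.Ports block above shows how minikube publishes each container port on an ephemeral loopback port; the apiserver's 8441/tcp landed on 127.0.0.1:32781 in this run. A short Go sketch of how that mapping can be recovered programmatically from `docker inspect` (the container name is the one in this report; only the fields used are modeled):

// hostport.go - a sketch that recovers the published host port for the
// apiserver (8441/tcp) from `docker inspect`, i.e. the NetworkSettings.Ports
// block printed above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "functional-889240").Output()
	if err != nil {
		log.Fatal(err)
	}
	var cs []container // docker inspect always returns a JSON array
	if err := json.Unmarshal(out, &cs); err != nil || len(cs) == 0 {
		log.Fatalf("unexpected inspect output: %v", err)
	}
	for _, b := range cs[0].NetworkSettings.Ports["8441/tcp"] {
		fmt.Printf("apiserver published on %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:32781 here
	}
}
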
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240: exit status 6 (291.930465ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:08:10.051505   31208 status.go:458] kubeconfig endpoint: get endpoint: "functional-889240" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
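
Exit status 6 traces back to the kubeconfig lookup at status.go:458 in the stderr above: the cluster never finished starting, so minikube never wrote a "functional-889240" entry into kubeconfig. A hedged sketch of that lookup, assuming k8s.io/client-go is available (the path and profile name are the ones quoted above):

// kubeconfig_lookup.go - a sketch of the lookup that fails at status.go:458
// above: does the profile have a cluster entry in kubeconfig? Assumes
// k8s.io/client-go is on the module path.
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := "/home/jenkins/minikube-integration/21625-8669/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatal(err)
	}
	if cluster, ok := cfg.Clusters["functional-889240"]; ok {
		fmt.Println("endpoint:", cluster.Server)
		return
	}
	// The state this test hit: kubeadm never finished, so minikube never wrote
	// the profile into kubeconfig, and `minikube status` exits with code 6.
	fmt.Println(`"functional-889240" does not appear in kubeconfig; try "minikube update-context"`)
}
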
helpers_test.go:252: <<< TestFunctional/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 logs -n 25
helpers_test.go:260: TestFunctional/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-903573                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-903573   │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │ 03 Oct 25 17:42 UTC │
	│ delete  │ -p download-only-455553                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-455553   │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │ 03 Oct 25 17:42 UTC │
	│ start   │ --download-only -p download-docker-423289 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-423289 │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │                     │
	│ delete  │ -p download-docker-423289                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-423289 │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │ 03 Oct 25 17:42 UTC │
	│ start   │ --download-only -p binary-mirror-626924 --alsologtostderr --binary-mirror http://127.0.0.1:44037 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-626924   │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │                     │
	│ delete  │ -p binary-mirror-626924                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-626924   │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │ 03 Oct 25 17:42 UTC │
	│ addons  │ disable dashboard -p addons-051972                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-051972          │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │                     │
	│ addons  │ enable dashboard -p addons-051972                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-051972          │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │                     │
	│ start   │ -p addons-051972 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-051972          │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │                     │
	│ delete  │ -p addons-051972                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-051972          │ jenkins │ v1.37.0 │ 03 Oct 25 17:51 UTC │ 03 Oct 25 17:51 UTC │
	│ start   │ -p nospam-093146 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-093146 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:51 UTC │                     │
	│ start   │ nospam-093146 --log_dir /tmp/nospam-093146 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │                     │
	│ start   │ nospam-093146 --log_dir /tmp/nospam-093146 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │                     │
	│ start   │ nospam-093146 --log_dir /tmp/nospam-093146 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │                     │
	│ pause   │ nospam-093146 --log_dir /tmp/nospam-093146 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ pause   │ nospam-093146 --log_dir /tmp/nospam-093146 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ pause   │ nospam-093146 --log_dir /tmp/nospam-093146 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ unpause │ nospam-093146 --log_dir /tmp/nospam-093146 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ unpause │ nospam-093146 --log_dir /tmp/nospam-093146 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ unpause │ nospam-093146 --log_dir /tmp/nospam-093146 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ stop    │ nospam-093146 --log_dir /tmp/nospam-093146 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ stop    │ nospam-093146 --log_dir /tmp/nospam-093146 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ stop    │ nospam-093146 --log_dir /tmp/nospam-093146 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ delete  │ -p nospam-093146                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ start   │ -p functional-889240 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-889240      │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 17:59:51
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 17:59:51.566260   26197 out.go:360] Setting OutFile to fd 1 ...
	I1003 17:59:51.566535   26197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 17:59:51.566539   26197 out.go:374] Setting ErrFile to fd 2...
	I1003 17:59:51.566542   26197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 17:59:51.566724   26197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 17:59:51.567224   26197 out.go:368] Setting JSON to false
	I1003 17:59:51.568150   26197 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2543,"bootTime":1759511849,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 17:59:51.568229   26197 start.go:140] virtualization: kvm guest
	I1003 17:59:51.570469   26197 out.go:179] * [functional-889240] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 17:59:51.571699   26197 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 17:59:51.571719   26197 notify.go:220] Checking for updates...
	I1003 17:59:51.574112   26197 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:59:51.575447   26197 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 17:59:51.576710   26197 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 17:59:51.577913   26197 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 17:59:51.579172   26197 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:59:51.580768   26197 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 17:59:51.604601   26197 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 17:59:51.604700   26197 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 17:59:51.661184   26197 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 17:59:51.650206392 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 17:59:51.661309   26197 docker.go:318] overlay module found
	I1003 17:59:51.663151   26197 out.go:179] * Using the docker driver based on user configuration
	I1003 17:59:51.664361   26197 start.go:304] selected driver: docker
	I1003 17:59:51.664367   26197 start.go:924] validating driver "docker" against <nil>
	I1003 17:59:51.664376   26197 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:59:51.664916   26197 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 17:59:51.720017   26197 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 17:59:51.710901114 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 17:59:51.720169   26197 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 17:59:51.720392   26197 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:59:51.722454   26197 out.go:179] * Using Docker driver with root privileges
	I1003 17:59:51.723587   26197 cni.go:84] Creating CNI manager for ""
	I1003 17:59:51.723637   26197 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 17:59:51.723643   26197 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 17:59:51.723706   26197 start.go:348] cluster config:
	{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 17:59:51.724860   26197 out.go:179] * Starting "functional-889240" primary control-plane node in "functional-889240" cluster
	I1003 17:59:51.725927   26197 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 17:59:51.727064   26197 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 17:59:51.728101   26197 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 17:59:51.728125   26197 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 17:59:51.728130   26197 cache.go:58] Caching tarball of preloaded images
	I1003 17:59:51.728196   26197 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 17:59:51.728197   26197 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 17:59:51.728202   26197 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 17:59:51.728496   26197 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/config.json ...
	I1003 17:59:51.728512   26197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/config.json: {Name:mkbb59c66436a58e826ce5321f2e893fe0e06329 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:59:51.748746   26197 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 17:59:51.748754   26197 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 17:59:51.748768   26197 cache.go:232] Successfully downloaded all kic artifacts
	I1003 17:59:51.748796   26197 start.go:360] acquireMachinesLock for functional-889240: {Name:mk6750a9fb1c1c3747b0abf2aebe2a2d0047ae3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:59:51.748884   26197 start.go:364] duration metric: took 77.164µs to acquireMachinesLock for "functional-889240"
	I1003 17:59:51.748901   26197 start.go:93] Provisioning new machine with config: &{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 17:59:51.748956   26197 start.go:125] createHost starting for "" (driver="docker")
	I1003 17:59:51.751446   26197 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1003 17:59:51.751678   26197 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:43153 to docker env.
	I1003 17:59:51.751704   26197 start.go:159] libmachine.API.Create for "functional-889240" (driver="docker")
	I1003 17:59:51.751721   26197 client.go:168] LocalClient.Create starting
	I1003 17:59:51.751792   26197 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem
	I1003 17:59:51.751824   26197 main.go:141] libmachine: Decoding PEM data...
	I1003 17:59:51.751836   26197 main.go:141] libmachine: Parsing certificate...
	I1003 17:59:51.751882   26197 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem
	I1003 17:59:51.751896   26197 main.go:141] libmachine: Decoding PEM data...
	I1003 17:59:51.751902   26197 main.go:141] libmachine: Parsing certificate...
	I1003 17:59:51.752245   26197 cli_runner.go:164] Run: docker network inspect functional-889240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 17:59:51.768470   26197 cli_runner.go:211] docker network inspect functional-889240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 17:59:51.768530   26197 network_create.go:284] running [docker network inspect functional-889240] to gather additional debugging logs...
	I1003 17:59:51.768543   26197 cli_runner.go:164] Run: docker network inspect functional-889240
	W1003 17:59:51.785576   26197 cli_runner.go:211] docker network inspect functional-889240 returned with exit code 1
	I1003 17:59:51.785597   26197 network_create.go:287] error running [docker network inspect functional-889240]: docker network inspect functional-889240: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-889240 not found
	I1003 17:59:51.785611   26197 network_create.go:289] output of [docker network inspect functional-889240]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-889240 not found
	
	** /stderr **
	I1003 17:59:51.785753   26197 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 17:59:51.803036   26197 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00158cc40}
	I1003 17:59:51.803080   26197 network_create.go:124] attempt to create docker network functional-889240 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1003 17:59:51.803122   26197 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-889240 functional-889240
	I1003 17:59:51.857810   26197 network_create.go:108] docker network functional-889240 192.168.49.0/24 created
	I1003 17:59:51.857856   26197 kic.go:121] calculated static IP "192.168.49.2" for the "functional-889240" container
	I1003 17:59:51.857910   26197 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 17:59:51.874381   26197 cli_runner.go:164] Run: docker volume create functional-889240 --label name.minikube.sigs.k8s.io=functional-889240 --label created_by.minikube.sigs.k8s.io=true
	I1003 17:59:51.892022   26197 oci.go:103] Successfully created a docker volume functional-889240
	I1003 17:59:51.892095   26197 cli_runner.go:164] Run: docker run --rm --name functional-889240-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-889240 --entrypoint /usr/bin/test -v functional-889240:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 17:59:52.274728   26197 oci.go:107] Successfully prepared a docker volume functional-889240
	I1003 17:59:52.274759   26197 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 17:59:52.274777   26197 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 17:59:52.274856   26197 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v functional-889240:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 17:59:56.554758   26197 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v functional-889240:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.279869609s)
	I1003 17:59:56.554779   26197 kic.go:203] duration metric: took 4.279997772s to extract preloaded images to volume ...
	W1003 17:59:56.554889   26197 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1003 17:59:56.554921   26197 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1003 17:59:56.554955   26197 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 17:59:56.604871   26197 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-889240 --name functional-889240 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-889240 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-889240 --network functional-889240 --ip 192.168.49.2 --volume functional-889240:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1003 17:59:56.858731   26197 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Running}}
	I1003 17:59:56.877087   26197 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 17:59:56.895505   26197 cli_runner.go:164] Run: docker exec functional-889240 stat /var/lib/dpkg/alternatives/iptables
	I1003 17:59:56.941762   26197 oci.go:144] the created container "functional-889240" has a running status.
	I1003 17:59:56.941791   26197 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa...
	I1003 17:59:57.064512   26197 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 17:59:57.092284   26197 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 17:59:57.114089   26197 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 17:59:57.114103   26197 kic_runner.go:114] Args: [docker exec --privileged functional-889240 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 17:59:57.159365   26197 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 17:59:57.184463   26197 machine.go:93] provisionDockerMachine start ...
	I1003 17:59:57.184576   26197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 17:59:57.204890   26197 main.go:141] libmachine: Using SSH client type: native
	I1003 17:59:57.205199   26197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 17:59:57.205208   26197 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 17:59:57.351934   26197 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-889240
	
	I1003 17:59:57.351951   26197 ubuntu.go:182] provisioning hostname "functional-889240"
	I1003 17:59:57.352058   26197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 17:59:57.371083   26197 main.go:141] libmachine: Using SSH client type: native
	I1003 17:59:57.371338   26197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 17:59:57.371351   26197 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-889240 && echo "functional-889240" | sudo tee /etc/hostname
	I1003 17:59:57.525686   26197 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-889240
	
	I1003 17:59:57.525749   26197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 17:59:57.543500   26197 main.go:141] libmachine: Using SSH client type: native
	I1003 17:59:57.543714   26197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 17:59:57.543726   26197 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-889240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-889240/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-889240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 17:59:57.686918   26197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 17:59:57.686936   26197 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 17:59:57.686956   26197 ubuntu.go:190] setting up certificates
	I1003 17:59:57.686963   26197 provision.go:84] configureAuth start
	I1003 17:59:57.687032   26197 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-889240
	I1003 17:59:57.703841   26197 provision.go:143] copyHostCerts
	I1003 17:59:57.703890   26197 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 17:59:57.703896   26197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 17:59:57.703963   26197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 17:59:57.704081   26197 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 17:59:57.704085   26197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 17:59:57.704112   26197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 17:59:57.704178   26197 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 17:59:57.704181   26197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 17:59:57.704203   26197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 17:59:57.704259   26197 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.functional-889240 san=[127.0.0.1 192.168.49.2 functional-889240 localhost minikube]
	I1003 17:59:57.875058   26197 provision.go:177] copyRemoteCerts
	I1003 17:59:57.875109   26197 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 17:59:57.875150   26197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 17:59:57.892634   26197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 17:59:57.992995   26197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 17:59:58.011543   26197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 17:59:58.028296   26197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1003 17:59:58.045064   26197 provision.go:87] duration metric: took 358.088454ms to configureAuth
	I1003 17:59:58.045086   26197 ubuntu.go:206] setting minikube options for container-runtime
	I1003 17:59:58.045256   26197 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 17:59:58.045350   26197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 17:59:58.062306   26197 main.go:141] libmachine: Using SSH client type: native
	I1003 17:59:58.062498   26197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 17:59:58.062507   26197 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 17:59:58.312461   26197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
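The step above drops a one-line environment file that the crio systemd unit reads; that is how the --insecure-registry flag for the service CIDR reaches the runtime. To confirm it took effect, one could read the file back from the node (a sketch; assumes the container is still named functional-889240):

    docker exec functional-889240 cat /etc/sysconfig/crio.minikube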
	I1003 17:59:58.312473   26197 machine.go:96] duration metric: took 1.127997603s to provisionDockerMachine
	I1003 17:59:58.312482   26197 client.go:171] duration metric: took 6.560756656s to LocalClient.Create
	I1003 17:59:58.312498   26197 start.go:167] duration metric: took 6.560794861s to libmachine.API.Create "functional-889240"
	I1003 17:59:58.312504   26197 start.go:293] postStartSetup for "functional-889240" (driver="docker")
	I1003 17:59:58.312511   26197 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 17:59:58.312552   26197 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 17:59:58.312595   26197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 17:59:58.330233   26197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 17:59:58.433915   26197 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 17:59:58.437178   26197 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 17:59:58.437195   26197 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 17:59:58.437204   26197 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 17:59:58.437247   26197 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 17:59:58.437325   26197 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 17:59:58.437394   26197 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts -> hosts in /etc/test/nested/copy/12212
	I1003 17:59:58.437423   26197 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/12212
	I1003 17:59:58.445069   26197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 17:59:58.465308   26197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts --> /etc/test/nested/copy/12212/hosts (40 bytes)
	I1003 17:59:58.481968   26197 start.go:296] duration metric: took 169.449134ms for postStartSetup
	I1003 17:59:58.482374   26197 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-889240
	I1003 17:59:58.500121   26197 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/config.json ...
	I1003 17:59:58.500342   26197 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 17:59:58.500379   26197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 17:59:58.518761   26197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 17:59:58.616809   26197 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 17:59:58.621100   26197 start.go:128] duration metric: took 6.872134036s to createHost
	I1003 17:59:58.621115   26197 start.go:83] releasing machines lock for "functional-889240", held for 6.872224711s
	I1003 17:59:58.621169   26197 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-889240
	I1003 17:59:58.640592   26197 out.go:179] * Found network options:
	I1003 17:59:58.642238   26197 out.go:179]   - HTTP_PROXY=localhost:43153
	W1003 17:59:58.643480   26197 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1003 17:59:58.644632   26197 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1003 17:59:58.645901   26197 ssh_runner.go:195] Run: cat /version.json
	I1003 17:59:58.645942   26197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 17:59:58.646007   26197 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 17:59:58.646053   26197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 17:59:58.664378   26197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 17:59:58.664742   26197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 17:59:58.814242   26197 ssh_runner.go:195] Run: systemctl --version
	I1003 17:59:58.820391   26197 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 17:59:58.854878   26197 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 17:59:58.859391   26197 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 17:59:58.859454   26197 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 17:59:58.883957   26197 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 17:59:58.883969   26197 start.go:495] detecting cgroup driver to use...
	I1003 17:59:58.884022   26197 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 17:59:58.884059   26197 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 17:59:58.899463   26197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 17:59:58.911725   26197 docker.go:218] disabling cri-docker service (if available) ...
	I1003 17:59:58.911768   26197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 17:59:58.927792   26197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 17:59:58.944718   26197 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 17:59:59.022951   26197 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 17:59:59.106733   26197 docker.go:234] disabling docker service ...
	I1003 17:59:59.106792   26197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 17:59:59.124117   26197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 17:59:59.136238   26197 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 17:59:59.217461   26197 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 17:59:59.296952   26197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 17:59:59.309095   26197 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 17:59:59.322561   26197 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 17:59:59.322602   26197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 17:59:59.332452   26197 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 17:59:59.332495   26197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 17:59:59.341062   26197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 17:59:59.349525   26197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 17:59:59.357774   26197 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 17:59:59.365378   26197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 17:59:59.373340   26197 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 17:59:59.385898   26197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
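Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys. This is a reconstruction from the commands in the log, not a capture of the file:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]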
	I1003 17:59:59.394491   26197 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 17:59:59.401694   26197 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 17:59:59.408954   26197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:59:59.488344   26197 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 17:59:59.593755   26197 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 17:59:59.593801   26197 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 17:59:59.597565   26197 start.go:563] Will wait 60s for crictl version
	I1003 17:59:59.597683   26197 ssh_runner.go:195] Run: which crictl
	I1003 17:59:59.601138   26197 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 17:59:59.625228   26197 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 17:59:59.625295   26197 ssh_runner.go:195] Run: crio --version
	I1003 17:59:59.651449   26197 ssh_runner.go:195] Run: crio --version
	I1003 17:59:59.679810   26197 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 17:59:59.680973   26197 cli_runner.go:164] Run: docker network inspect functional-889240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 17:59:59.698162   26197 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 17:59:59.702355   26197 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
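The grep-and-cp dance above, rather than a simple sed -i, is most likely deliberate: inside a container /etc/hosts is bind-mounted by Docker, so it cannot be replaced by rename; it has to be rewritten into a temp file and copied back over the mount point in place.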
	I1003 17:59:59.712672   26197 kubeadm.go:883] updating cluster {Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 17:59:59.712784   26197 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 17:59:59.712822   26197 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 17:59:59.742521   26197 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 17:59:59.742532   26197 crio.go:433] Images already preloaded, skipping extraction
	I1003 17:59:59.742573   26197 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 17:59:59.766667   26197 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 17:59:59.766679   26197 cache_images.go:85] Images are preloaded, skipping loading
	I1003 17:59:59.766685   26197 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1003 17:59:59.766775   26197 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-889240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
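The two kubelet flags that stand out in the unit above are --cgroups-per-qos=false and --enforce-node-allocatable=, which switch off the kubelet's QoS cgroup hierarchy and node-allocatable enforcement; that is the usual arrangement when the "node" is itself a container. To see the unit exactly as systemd resolves it, drop-ins included, one could run (a sketch, assuming the node container is up):

    docker exec functional-889240 systemctl cat kubelet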
	I1003 17:59:59.766829   26197 ssh_runner.go:195] Run: crio config
	I1003 17:59:59.811393   26197 cni.go:84] Creating CNI manager for ""
	I1003 17:59:59.811402   26197 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 17:59:59.811413   26197 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 17:59:59.811433   26197 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-889240 NodeName:functional-889240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 17:59:59.811545   26197 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-889240"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
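Note the evictionHard thresholds of "0%" and imageGCHighThresholdPercent: 100 in the kubelet section, which match the "disable disk resource management by default" comment: the kubelet will neither evict pods nor garbage-collect images under disk pressure. Recent kubeadm releases can also sanity-check a config like this before init; a hedged sketch, assuming `kubeadm config validate` is available in the v1.34 binary staged by minikube:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml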
	
	I1003 17:59:59.811598   26197 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 17:59:59.819296   26197 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 17:59:59.819342   26197 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 17:59:59.826435   26197 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1003 17:59:59.838202   26197 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 17:59:59.852970   26197 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1003 17:59:59.865047   26197 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 17:59:59.868468   26197 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 17:59:59.877610   26197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:59:59.957175   26197 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 17:59:59.981277   26197 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240 for IP: 192.168.49.2
	I1003 17:59:59.981288   26197 certs.go:195] generating shared ca certs ...
	I1003 17:59:59.981301   26197 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:59:59.981434   26197 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 17:59:59.981472   26197 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 17:59:59.981479   26197 certs.go:257] generating profile certs ...
	I1003 17:59:59.981534   26197 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.key
	I1003 17:59:59.981547   26197 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt with IP's: []
	I1003 18:00:00.675134   26197 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt ...
	I1003 18:00:00.675151   26197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: {Name:mk97d0e631126b0d90c7cb7fdd2d1000dda69da0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:00:00.675346   26197 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.key ...
	I1003 18:00:00.675352   26197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.key: {Name:mk8c3aa792c5164db09ce5d123a6c926e8a4eb17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:00:00.675432   26197 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key.eb3f8f7c
	I1003 18:00:00.675441   26197 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.crt.eb3f8f7c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1003 18:00:00.691507   26197 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.crt.eb3f8f7c ...
	I1003 18:00:00.691523   26197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.crt.eb3f8f7c: {Name:mkba7e78127a4def185ef955f1414bb991f8ea18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:00:00.691690   26197 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key.eb3f8f7c ...
	I1003 18:00:00.691698   26197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key.eb3f8f7c: {Name:mka0191640a8c28c1a4ea32b4ad65c3326e96b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:00:00.691768   26197 certs.go:382] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.crt.eb3f8f7c -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.crt
	I1003 18:00:00.691837   26197 certs.go:386] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key.eb3f8f7c -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key
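Since the apiserver cert above was generated with the SAN list [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2], a quick openssl read-back is the easiest way to confirm all four IPs made it in (a sketch against the profile path from the log):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.crt \
      | grep -A1 'Subject Alternative Name'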
	I1003 18:00:00.691884   26197 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key
	I1003 18:00:00.691896   26197 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.crt with IP's: []
	I1003 18:00:01.277721   26197 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.crt ...
	I1003 18:00:01.277738   26197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.crt: {Name:mk6c93ad04480d483bebfce6635202bbcfe4b221 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:00:01.277932   26197 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key ...
	I1003 18:00:01.277940   26197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key: {Name:mkb6b7449d39dcd5e46bff25129de4d2692260d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:00:01.278131   26197 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:00:01.278165   26197 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:00:01.278171   26197 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:00:01.278192   26197 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:00:01.278220   26197 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:00:01.278237   26197 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:00:01.278269   26197 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:00:01.278861   26197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:00:01.297885   26197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:00:01.315892   26197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:00:01.334271   26197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:00:01.351839   26197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 18:00:01.369352   26197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:00:01.386478   26197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:00:01.403449   26197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:00:01.420448   26197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:00:01.439335   26197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:00:01.457183   26197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:00:01.474504   26197 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:00:01.486805   26197 ssh_runner.go:195] Run: openssl version
	I1003 18:00:01.492922   26197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:00:01.501434   26197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:00:01.505252   26197 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:00:01.505307   26197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:00:01.539623   26197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:00:01.548415   26197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:00:01.557432   26197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:00:01.561829   26197 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:00:01.561884   26197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:00:01.605736   26197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:00:01.614727   26197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:00:01.623276   26197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:00:01.627090   26197 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:00:01.627137   26197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:00:01.661318   26197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
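The <hash>.0 symlinks created in this stretch are OpenSSL's subject-hash lookup scheme: clients hash a certificate's subject and look for /etc/ssl/certs/<hash>.0. The link the log creates for minikubeCA can be reproduced by hand (a sketch; per the log the hash comes out to b5213941):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"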
	I1003 18:00:01.670351   26197 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:00:01.673937   26197 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 18:00:01.674004   26197 kubeadm.go:400] StartCluster: {Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:00:01.674062   26197 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:00:01.674108   26197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:00:01.700559   26197 cri.go:89] found id: ""
	I1003 18:00:01.700640   26197 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:00:01.709763   26197 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:00:01.718651   26197 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:00:01.718704   26197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:00:01.726647   26197 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:00:01.726660   26197 kubeadm.go:157] found existing configuration files:
	
	I1003 18:00:01.726707   26197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1003 18:00:01.734306   26197 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:00:01.734358   26197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:00:01.741854   26197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1003 18:00:01.749679   26197 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:00:01.749734   26197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:00:01.757405   26197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1003 18:00:01.765128   26197 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:00:01.765171   26197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:00:01.772902   26197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1003 18:00:01.780825   26197 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:00:01.780872   26197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:00:01.788393   26197 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:00:01.847351   26197 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:00:01.905491   26197 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:04:06.624506   26197 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1003 18:04:06.624620   26197 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 18:04:06.627222   26197 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:04:06.627264   26197 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:04:06.627337   26197 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:04:06.627388   26197 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:04:06.627415   26197 kubeadm.go:318] OS: Linux
	I1003 18:04:06.627452   26197 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:04:06.627490   26197 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:04:06.627528   26197 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:04:06.627567   26197 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:04:06.627614   26197 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:04:06.627655   26197 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:04:06.627694   26197 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:04:06.627728   26197 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:04:06.627809   26197 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:04:06.627888   26197 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:04:06.627970   26197 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:04:06.628040   26197 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:04:06.630380   26197 out.go:252]   - Generating certificates and keys ...
	I1003 18:04:06.630449   26197 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:04:06.630528   26197 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:04:06.630589   26197 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 18:04:06.630635   26197 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 18:04:06.630691   26197 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 18:04:06.630742   26197 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 18:04:06.630786   26197 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 18:04:06.630895   26197 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [functional-889240 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:04:06.630941   26197 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 18:04:06.631075   26197 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [functional-889240 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:04:06.631125   26197 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 18:04:06.631188   26197 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 18:04:06.631227   26197 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 18:04:06.631277   26197 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:04:06.631318   26197 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:04:06.631380   26197 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:04:06.631437   26197 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:04:06.631514   26197 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:04:06.631558   26197 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:04:06.631632   26197 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:04:06.631691   26197 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:04:06.633798   26197 out.go:252]   - Booting up control plane ...
	I1003 18:04:06.633872   26197 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:04:06.633948   26197 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:04:06.634024   26197 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:04:06.634109   26197 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:04:06.634205   26197 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:04:06.634298   26197 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:04:06.634374   26197 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:04:06.634411   26197 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:04:06.634532   26197 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:04:06.634623   26197 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:04:06.634667   26197 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001898169s
	I1003 18:04:06.634746   26197 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:04:06.634810   26197 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1003 18:04:06.634876   26197 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:04:06.634945   26197 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:04:06.635025   26197 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000013558s
	I1003 18:04:06.635093   26197 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000028101s
	I1003 18:04:06.635152   26197 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000447455s
	I1003 18:04:06.635155   26197 kubeadm.go:318] 
	I1003 18:04:06.635228   26197 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:04:06.635305   26197 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:04:06.635377   26197 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:04:06.635462   26197 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:04:06.635527   26197 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:04:06.635591   26197 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:04:06.635619   26197 kubeadm.go:318] 
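All three health checks above timed out at exactly 4m0s with connection refused, which points at the static pods never coming up rather than at a slow start. Besides the crictl commands kubeadm suggests, the endpoints it polls can be probed directly from the host (a sketch; assumes curl is present in the kicbase image):

    docker exec functional-889240 curl -sk https://192.168.49.2:8441/livez
    docker exec functional-889240 curl -sk https://127.0.0.1:10257/healthz
    docker exec functional-889240 curl -sk https://127.0.0.1:10259/livez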
	W1003 18:04:06.635726   26197 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-889240 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-889240 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001898169s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000013558s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000028101s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000447455s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
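The three "connection refused" failures above mean none of the control-plane components ever started listening; the kubelet itself reported healthy, so the breakage sits between the kubelet and the container runtime. kubeadm's own troubleshooting hint can be followed directly against this profile. A minimal triage sequence, assuming shell access to the node (for example via `minikube ssh -p functional-889240`, the profile named in these logs):

  # list all kube containers, running or exited (command quoted verbatim from the hint above)
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
  # once a failing container ID is known, pull its logs
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
  # and check the runtime itself
  sudo journalctl -u crio -n 200 --no-pager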
	I1003 18:04:06.635798   26197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 18:04:07.076793   26197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:04:07.089108   26197 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:04:07.089150   26197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:04:07.096720   26197 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:04:07.096729   26197 kubeadm.go:157] found existing configuration files:
	
	I1003 18:04:07.096763   26197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1003 18:04:07.103814   26197 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:04:07.103855   26197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:04:07.110767   26197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1003 18:04:07.117765   26197 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:04:07.117808   26197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:04:07.125151   26197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1003 18:04:07.132438   26197 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:04:07.132475   26197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:04:07.139357   26197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1003 18:04:07.146424   26197 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:04:07.146468   26197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:04:07.153121   26197 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:04:07.186667   26197 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:04:07.186728   26197 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:04:07.205531   26197 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:04:07.205585   26197 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:04:07.205623   26197 kubeadm.go:318] OS: Linux
	I1003 18:04:07.205671   26197 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:04:07.205721   26197 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:04:07.205803   26197 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:04:07.205847   26197 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:04:07.205882   26197 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:04:07.205934   26197 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:04:07.206030   26197 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:04:07.206072   26197 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:04:07.260328   26197 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:04:07.260494   26197 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:04:07.260621   26197 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:04:07.267272   26197 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:04:07.270960   26197 out.go:252]   - Generating certificates and keys ...
	I1003 18:04:07.271036   26197 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:04:07.271123   26197 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:04:07.271230   26197 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 18:04:07.271285   26197 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 18:04:07.271371   26197 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 18:04:07.271449   26197 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 18:04:07.271551   26197 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 18:04:07.271601   26197 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 18:04:07.271659   26197 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 18:04:07.271714   26197 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 18:04:07.271742   26197 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 18:04:07.271788   26197 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:04:07.338318   26197 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:04:07.901319   26197 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:04:08.239431   26197 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:04:08.298323   26197 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:04:08.702144   26197 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:04:08.702527   26197 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:04:08.705673   26197 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:04:08.707485   26197 out.go:252]   - Booting up control plane ...
	I1003 18:04:08.707600   26197 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:04:08.707685   26197 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:04:08.707755   26197 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:04:08.720357   26197 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:04:08.720461   26197 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:04:08.727044   26197 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:04:08.727162   26197 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:04:08.727196   26197 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:04:08.826441   26197 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:04:08.826587   26197 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:04:09.327298   26197 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.923476ms
	I1003 18:04:09.330182   26197 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:04:09.330281   26197 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1003 18:04:09.330354   26197 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:04:09.330466   26197 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:08:09.330535   26197 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000040257s
	I1003 18:08:09.330645   26197 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000160452s
	I1003 18:08:09.330784   26197 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000156433s
	I1003 18:08:09.330801   26197 kubeadm.go:318] 
	I1003 18:08:09.330903   26197 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:08:09.331005   26197 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:08:09.331086   26197 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:08:09.331204   26197 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:08:09.331305   26197 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:08:09.331435   26197 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:08:09.331443   26197 kubeadm.go:318] 
	I1003 18:08:09.334272   26197 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:08:09.334367   26197 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:08:09.334902   26197 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused]
	I1003 18:08:09.334959   26197 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 18:08:09.335056   26197 kubeadm.go:402] duration metric: took 8m7.661056136s to StartCluster
	I1003 18:08:09.335101   26197 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:08:09.335157   26197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:08:09.360659   26197 cri.go:89] found id: ""
	I1003 18:08:09.360688   26197 logs.go:282] 0 containers: []
	W1003 18:08:09.360703   26197 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:08:09.360709   26197 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:08:09.360760   26197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:08:09.385691   26197 cri.go:89] found id: ""
	I1003 18:08:09.385704   26197 logs.go:282] 0 containers: []
	W1003 18:08:09.385710   26197 logs.go:284] No container was found matching "etcd"
	I1003 18:08:09.385715   26197 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:08:09.385761   26197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:08:09.409277   26197 cri.go:89] found id: ""
	I1003 18:08:09.409296   26197 logs.go:282] 0 containers: []
	W1003 18:08:09.409304   26197 logs.go:284] No container was found matching "coredns"
	I1003 18:08:09.409309   26197 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:08:09.409355   26197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:08:09.434107   26197 cri.go:89] found id: ""
	I1003 18:08:09.434127   26197 logs.go:282] 0 containers: []
	W1003 18:08:09.434136   26197 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:08:09.434144   26197 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:08:09.434209   26197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:08:09.458577   26197 cri.go:89] found id: ""
	I1003 18:08:09.458595   26197 logs.go:282] 0 containers: []
	W1003 18:08:09.458604   26197 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:08:09.458610   26197 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:08:09.458666   26197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:08:09.483051   26197 cri.go:89] found id: ""
	I1003 18:08:09.483067   26197 logs.go:282] 0 containers: []
	W1003 18:08:09.483076   26197 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:08:09.483082   26197 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:08:09.483141   26197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:08:09.507238   26197 cri.go:89] found id: ""
	I1003 18:08:09.507255   26197 logs.go:282] 0 containers: []
	W1003 18:08:09.507265   26197 logs.go:284] No container was found matching "kindnet"
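After the retry times out, minikube enumerates the expected control-plane containers through crictl, and every query above returns an empty ID list: no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet container was ever created. The same enumeration can be reproduced in one pass; a sketch assuming crictl is on the node's PATH:

  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"   # same query minikube runs per component
  done

Empty output for every name confirms the failure is at container creation, not at the health checks; the checks only report the symptom.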
	I1003 18:08:09.507273   26197 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:08:09.507283   26197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:08:09.564145   26197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:08:09.557370    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:08:09.557908    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:08:09.559476    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:08:09.559872    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:08:09.561418    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:08:09.557370    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:08:09.557908    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:08:09.559476    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:08:09.559872    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:08:09.561418    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:08:09.564162   26197 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:08:09.564174   26197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:08:09.626999   26197 logs.go:123] Gathering logs for container status ...
	I1003 18:08:09.627018   26197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:08:09.654581   26197 logs.go:123] Gathering logs for kubelet ...
	I1003 18:08:09.654597   26197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:08:09.722410   26197 logs.go:123] Gathering logs for dmesg ...
	I1003 18:08:09.722428   26197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1003 18:08:09.733459   26197 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.923476ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000040257s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000160452s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000156433s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1003 18:08:09.733516   26197 out.go:285] * 
	W1003 18:08:09.733580   26197 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.923476ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000040257s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000160452s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000156433s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:08:09.733590   26197 out.go:285] * 
	W1003 18:08:09.735289   26197 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:08:09.738997   26197 out.go:203] 
	W1003 18:08:09.740347   26197 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.923476ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000040257s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000160452s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000156433s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:08:09.740376   26197 out.go:285] * 
	I1003 18:08:09.742327   26197 out.go:203] 
	
	
	==> CRI-O <==
	Oct 03 18:08:02 functional-889240 crio[789]: time="2025-10-03T18:08:02.234077769Z" level=info msg="createCtr: removing container 238dfea085c02b6dd1ccd4140e4e1b11be6b56953c4a0d26b75e5392c2a22647" id=a6e038a2-8b1c-4c24-81ec-a2cd52fd6c67 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:08:02 functional-889240 crio[789]: time="2025-10-03T18:08:02.234102979Z" level=info msg="createCtr: deleting container 238dfea085c02b6dd1ccd4140e4e1b11be6b56953c4a0d26b75e5392c2a22647 from storage" id=a6e038a2-8b1c-4c24-81ec-a2cd52fd6c67 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:08:02 functional-889240 crio[789]: time="2025-10-03T18:08:02.236111309Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-889240_kube-system_a73daf0147d5280c6db538ca59db9fe0_0" id=a6e038a2-8b1c-4c24-81ec-a2cd52fd6c67 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:08:06 functional-889240 crio[789]: time="2025-10-03T18:08:06.211798285Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=525abeac-743f-4bc5-9e38-91c74a56ee19 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:08:06 functional-889240 crio[789]: time="2025-10-03T18:08:06.21263662Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=6389daf3-7a7a-4b4d-8ecc-c79a8630ae74 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:08:06 functional-889240 crio[789]: time="2025-10-03T18:08:06.21455838Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-889240/kube-controller-manager" id=dd76de77-e09c-4e39-bc50-248ffdb7e694 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:08:06 functional-889240 crio[789]: time="2025-10-03T18:08:06.214788944Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:08:06 functional-889240 crio[789]: time="2025-10-03T18:08:06.218124816Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:08:06 functional-889240 crio[789]: time="2025-10-03T18:08:06.21850601Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:08:06 functional-889240 crio[789]: time="2025-10-03T18:08:06.232036338Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=dd76de77-e09c-4e39-bc50-248ffdb7e694 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:08:06 functional-889240 crio[789]: time="2025-10-03T18:08:06.233420982Z" level=info msg="createCtr: deleting container ID 939b8c492fec6ad63e47490121f1751bbf1f6c6329ca005d340768306749daa2 from idIndex" id=dd76de77-e09c-4e39-bc50-248ffdb7e694 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:08:06 functional-889240 crio[789]: time="2025-10-03T18:08:06.233455318Z" level=info msg="createCtr: removing container 939b8c492fec6ad63e47490121f1751bbf1f6c6329ca005d340768306749daa2" id=dd76de77-e09c-4e39-bc50-248ffdb7e694 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:08:06 functional-889240 crio[789]: time="2025-10-03T18:08:06.233482096Z" level=info msg="createCtr: deleting container 939b8c492fec6ad63e47490121f1751bbf1f6c6329ca005d340768306749daa2 from storage" id=dd76de77-e09c-4e39-bc50-248ffdb7e694 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:08:06 functional-889240 crio[789]: time="2025-10-03T18:08:06.235382348Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-889240_kube-system_7e715cb6024854d45a9fa99576167e43_0" id=dd76de77-e09c-4e39-bc50-248ffdb7e694 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:08:08 functional-889240 crio[789]: time="2025-10-03T18:08:08.211937908Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=9336de31-cd9f-4435-9cd4-170944d7747c name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:08:08 functional-889240 crio[789]: time="2025-10-03T18:08:08.212780587Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=e001d3a0-bab5-4f61-8549-2705d2405289 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:08:08 functional-889240 crio[789]: time="2025-10-03T18:08:08.213631506Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-889240/kube-scheduler" id=2ff7bf1e-d358-49e3-a5bb-3444e8514b71 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:08:08 functional-889240 crio[789]: time="2025-10-03T18:08:08.213855796Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:08:08 functional-889240 crio[789]: time="2025-10-03T18:08:08.217264825Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:08:08 functional-889240 crio[789]: time="2025-10-03T18:08:08.217677881Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:08:08 functional-889240 crio[789]: time="2025-10-03T18:08:08.232936807Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=2ff7bf1e-d358-49e3-a5bb-3444e8514b71 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:08:08 functional-889240 crio[789]: time="2025-10-03T18:08:08.23428317Z" level=info msg="createCtr: deleting container ID b97f2d3e5d2c99313e7db7644ce2eca660597a531703e73fe2bd8da75148ed70 from idIndex" id=2ff7bf1e-d358-49e3-a5bb-3444e8514b71 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:08:08 functional-889240 crio[789]: time="2025-10-03T18:08:08.234313382Z" level=info msg="createCtr: removing container b97f2d3e5d2c99313e7db7644ce2eca660597a531703e73fe2bd8da75148ed70" id=2ff7bf1e-d358-49e3-a5bb-3444e8514b71 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:08:08 functional-889240 crio[789]: time="2025-10-03T18:08:08.23434076Z" level=info msg="createCtr: deleting container b97f2d3e5d2c99313e7db7644ce2eca660597a531703e73fe2bd8da75148ed70 from storage" id=2ff7bf1e-d358-49e3-a5bb-3444e8514b71 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:08:08 functional-889240 crio[789]: time="2025-10-03T18:08:08.236538187Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-889240_kube-system_7dadd1df42d6a2c3d1907f134f7d5ea7_0" id=2ff7bf1e-d358-49e3-a5bb-3444e8514b71 name=/runtime.v1.RuntimeService/CreateContainer
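The CRI-O log pins down the root cause: every CreateContainer call for the control-plane pods fails with "Container creation error: cannot open sd-bus: No such file or directory". That error is raised when CRI-O (or its OCI runtime) tries to reach systemd over D-Bus, typically because the runtime is configured to use the systemd cgroup manager while no systemd bus socket is reachable inside the node container. A minimal sketch for checking that hypothesis, assuming shell access to the node and a conventional CRI-O config layout (both paths below are assumptions, not shown in these logs):

  ps -p 1 -o comm=                                         # is systemd actually PID 1 in the node?
  ls -l /run/dbus/system_bus_socket /run/systemd/private   # are the bus sockets present?
  sudo grep -Rn "cgroup_manager" /etc/crio/                # is CRI-O set to cgroup_manager = "systemd"?

If the manager is "systemd" while no bus socket exists, the usual remediation is either to run the node with systemd available or to switch the runtime to the cgroupfs manager; which applies here depends on how the node image is built, which these logs do not show.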
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:08:10.615287    2583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:08:10.615824    2583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:08:10.617367    2583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:08:10.617743    2583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:08:10.619245    2583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:08:10 up 50 min,  0 user,  load average: 0.08, 0.04, 0.06
	Linux functional-889240 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:08:02 functional-889240 kubelet[1817]:         container etcd start failed in pod etcd-functional-889240_kube-system(a73daf0147d5280c6db538ca59db9fe0): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:08:02 functional-889240 kubelet[1817]:  > logger="UnhandledError"
	Oct 03 18:08:02 functional-889240 kubelet[1817]: E1003 18:08:02.236514    1817 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-889240" podUID="a73daf0147d5280c6db538ca59db9fe0"
	Oct 03 18:08:05 functional-889240 kubelet[1817]: E1003 18:08:05.835278    1817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-889240?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 03 18:08:05 functional-889240 kubelet[1817]: I1003 18:08:05.986928    1817 kubelet_node_status.go:75] "Attempting to register node" node="functional-889240"
	Oct 03 18:08:05 functional-889240 kubelet[1817]: E1003 18:08:05.987308    1817 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-889240"
	Oct 03 18:08:06 functional-889240 kubelet[1817]: E1003 18:08:06.211418    1817 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:08:06 functional-889240 kubelet[1817]: E1003 18:08:06.235657    1817 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:08:06 functional-889240 kubelet[1817]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:08:06 functional-889240 kubelet[1817]:  > podSandboxID="65835069a3bb03e380bb50149082d0338f4c2642bf6aea8dacf1e0715b6f21c8"
	Oct 03 18:08:06 functional-889240 kubelet[1817]: E1003 18:08:06.235741    1817 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:08:06 functional-889240 kubelet[1817]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-889240_kube-system(7e715cb6024854d45a9fa99576167e43): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:08:06 functional-889240 kubelet[1817]:  > logger="UnhandledError"
	Oct 03 18:08:06 functional-889240 kubelet[1817]: E1003 18:08:06.235766    1817 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-889240" podUID="7e715cb6024854d45a9fa99576167e43"
	Oct 03 18:08:08 functional-889240 kubelet[1817]: E1003 18:08:08.211529    1817 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:08:08 functional-889240 kubelet[1817]: E1003 18:08:08.236836    1817 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:08:08 functional-889240 kubelet[1817]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:08:08 functional-889240 kubelet[1817]:  > podSandboxID="9ea0d784c2fd12bcd1db05033ba2964baa15be14deeae00b6508f924c37e3473"
	Oct 03 18:08:08 functional-889240 kubelet[1817]: E1003 18:08:08.236930    1817 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:08:08 functional-889240 kubelet[1817]:         container kube-scheduler start failed in pod kube-scheduler-functional-889240_kube-system(7dadd1df42d6a2c3d1907f134f7d5ea7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:08:08 functional-889240 kubelet[1817]:  > logger="UnhandledError"
	Oct 03 18:08:08 functional-889240 kubelet[1817]: E1003 18:08:08.236962    1817 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-889240" podUID="7dadd1df42d6a2c3d1907f134f7d5ea7"
	Oct 03 18:08:08 functional-889240 kubelet[1817]: E1003 18:08:08.591681    1817 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-889240.186b0d404ae58a04  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-889240,UID:functional-889240,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-889240 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-889240,},FirstTimestamp:2025-10-03 18:04:09.203935748 +0000 UTC m=+0.376858749,LastTimestamp:2025-10-03 18:04:09.203935748 +0000 UTC m=+0.376858749,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-889240,}"
	Oct 03 18:08:09 functional-889240 kubelet[1817]: E1003 18:08:09.224846    1817 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-889240\" not found"
	Oct 03 18:08:09 functional-889240 kubelet[1817]: E1003 18:08:09.864427    1817 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	
-- /stdout --
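The repeated CreateContainerError above ("cannot open sd-bus: No such file or directory") is what keeps kube-controller-manager and kube-scheduler from starting: with cri-o set to the "systemd" cgroup manager, the OCI runtime asks systemd over its bus socket to create the container's scope, and that socket is evidently unreachable inside the node container. A minimal diagnostic sketch, assuming host-side docker access to the node container named in the logs:

	# Hypothetical check: confirm the bus sockets the systemd cgroup
	# driver talks to are present inside the kicbase container.
	docker exec functional-889240 ls -l /run/systemd/private /run/dbus/system_bus_socket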
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240: exit status 6 (303.816495ms)
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E1003 18:08:11.001587   31535 status.go:458] kubeconfig endpoint: get endpoint: "functional-889240" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-889240" apiserver is not running, skipping kubectl commands (state="Stopped")
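The "Stopped" status and the stale-kubeconfig warning point at the same state: the "functional-889240" entry is missing from the kubeconfig, so status cannot resolve an API server endpoint. As the warning itself suggests, the context can be re-synced from the profile; a sketch using the same binary and profile as above:

	# Rewrites the kubeconfig cluster/context for this profile to its
	# current API server endpoint.
	out/minikube-linux-amd64 update-context -p functional-889240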
--- FAIL: TestFunctional/serial/StartWithProxy (499.50s)
TestFunctional/serial/SoftStart (366.08s)
=== RUN   TestFunctional/serial/SoftStart
I1003 18:08:11.016422   12212 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-889240 --alsologtostderr -v=8
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-889240 --alsologtostderr -v=8: exit status 80 (6m3.682426431s)
-- stdout --
	* [functional-889240] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-889240" primary control-plane node in "functional-889240" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	
-- /stdout --
** stderr ** 
	I1003 18:08:11.068231   31648 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:08:11.068486   31648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:08:11.068496   31648 out.go:374] Setting ErrFile to fd 2...
	I1003 18:08:11.068502   31648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:08:11.068729   31648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:08:11.069215   31648 out.go:368] Setting JSON to false
	I1003 18:08:11.070085   31648 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3042,"bootTime":1759511849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:08:11.070168   31648 start.go:140] virtualization: kvm guest
	I1003 18:08:11.073397   31648 out.go:179] * [functional-889240] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:08:11.074567   31648 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:08:11.074571   31648 notify.go:220] Checking for updates...
	I1003 18:08:11.077123   31648 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:08:11.078380   31648 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:11.079542   31648 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:08:11.080665   31648 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:08:11.081754   31648 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:08:11.083246   31648 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:08:11.083337   31648 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:08:11.109195   31648 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:08:11.109276   31648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:08:11.161161   31648 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:08:11.151693527 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:08:11.161260   31648 docker.go:318] overlay module found
	I1003 18:08:11.162933   31648 out.go:179] * Using the docker driver based on existing profile
	I1003 18:08:11.164103   31648 start.go:304] selected driver: docker
	I1003 18:08:11.164115   31648 start.go:924] validating driver "docker" against &{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:08:11.164183   31648 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:08:11.164266   31648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:08:11.217384   31648 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:08:11.207171248 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:08:11.218094   31648 cni.go:84] Creating CNI manager for ""
	I1003 18:08:11.218156   31648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:08:11.218200   31648 start.go:348] cluster config:
	{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:08:11.220110   31648 out.go:179] * Starting "functional-889240" primary control-plane node in "functional-889240" cluster
	I1003 18:08:11.221257   31648 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:08:11.222336   31648 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:08:11.223595   31648 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:08:11.223644   31648 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:08:11.223654   31648 cache.go:58] Caching tarball of preloaded images
	I1003 18:08:11.223686   31648 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:08:11.223758   31648 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:08:11.223772   31648 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:08:11.223859   31648 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/config.json ...
	I1003 18:08:11.242913   31648 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:08:11.242930   31648 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:08:11.242946   31648 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:08:11.242988   31648 start.go:360] acquireMachinesLock for functional-889240: {Name:mk6750a9fb1c1c3747b0abf2aebe2a2d0047ae3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:08:11.243063   31648 start.go:364] duration metric: took 50.516µs to acquireMachinesLock for "functional-889240"
	I1003 18:08:11.243090   31648 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:08:11.243097   31648 fix.go:54] fixHost starting: 
	I1003 18:08:11.243298   31648 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:08:11.259925   31648 fix.go:112] recreateIfNeeded on functional-889240: state=Running err=<nil>
	W1003 18:08:11.259951   31648 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 18:08:11.261699   31648 out.go:252] * Updating the running docker "functional-889240" container ...
	I1003 18:08:11.261731   31648 machine.go:93] provisionDockerMachine start ...
	I1003 18:08:11.261806   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:11.278828   31648 main.go:141] libmachine: Using SSH client type: native
	I1003 18:08:11.279109   31648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:08:11.279121   31648 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:08:11.421621   31648 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-889240
	
	I1003 18:08:11.421642   31648 ubuntu.go:182] provisioning hostname "functional-889240"
	I1003 18:08:11.421693   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:11.439154   31648 main.go:141] libmachine: Using SSH client type: native
	I1003 18:08:11.439372   31648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:08:11.439384   31648 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-889240 && echo "functional-889240" | sudo tee /etc/hostname
	I1003 18:08:11.590164   31648 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-889240
	
	I1003 18:08:11.590238   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:11.607612   31648 main.go:141] libmachine: Using SSH client type: native
	I1003 18:08:11.607822   31648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:08:11.607839   31648 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-889240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-889240/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-889240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:08:11.750385   31648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:08:11.750412   31648 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:08:11.750443   31648 ubuntu.go:190] setting up certificates
	I1003 18:08:11.750454   31648 provision.go:84] configureAuth start
	I1003 18:08:11.750512   31648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-889240
	I1003 18:08:11.767416   31648 provision.go:143] copyHostCerts
	I1003 18:08:11.767453   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:08:11.767484   31648 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:08:11.767498   31648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:08:11.767564   31648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:08:11.767659   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:08:11.767679   31648 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:08:11.767686   31648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:08:11.767714   31648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:08:11.767934   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:08:11.768183   31648 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:08:11.768200   31648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:08:11.768251   31648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:08:11.768350   31648 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.functional-889240 san=[127.0.0.1 192.168.49.2 functional-889240 localhost minikube]
	I1003 18:08:11.920440   31648 provision.go:177] copyRemoteCerts
	I1003 18:08:11.920514   31648 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:08:11.920551   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:11.938061   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.037875   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:08:12.037937   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1003 18:08:12.054720   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:08:12.054773   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 18:08:12.071055   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:08:12.071110   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:08:12.087547   31648 provision.go:87] duration metric: took 337.079976ms to configureAuth
	I1003 18:08:12.087574   31648 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:08:12.087766   31648 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:08:12.087867   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.105048   31648 main.go:141] libmachine: Using SSH client type: native
	I1003 18:08:12.105289   31648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:08:12.105305   31648 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:08:12.366340   31648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
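	# Note: the drop-in written above is presumably referenced by the
	# crio systemd unit via EnvironmentFile, adding --insecure-registry
	# for the service CIDR so in-cluster registries can be used over
	# plain HTTP. A hypothetical way to confirm it took effect:
	#   docker exec functional-889240 sh -c 'cat /etc/sysconfig/crio.minikube; pgrep -af crio'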
	I1003 18:08:12.366367   31648 machine.go:96] duration metric: took 1.104629442s to provisionDockerMachine
	I1003 18:08:12.366377   31648 start.go:293] postStartSetup for "functional-889240" (driver="docker")
	I1003 18:08:12.366388   31648 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:08:12.366431   31648 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:08:12.366476   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.383468   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.483988   31648 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:08:12.487264   31648 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1003 18:08:12.487282   31648 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1003 18:08:12.487289   31648 command_runner.go:130] > VERSION_ID="12"
	I1003 18:08:12.487295   31648 command_runner.go:130] > VERSION="12 (bookworm)"
	I1003 18:08:12.487301   31648 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1003 18:08:12.487306   31648 command_runner.go:130] > ID=debian
	I1003 18:08:12.487313   31648 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1003 18:08:12.487320   31648 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1003 18:08:12.487329   31648 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1003 18:08:12.487402   31648 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:08:12.487425   31648 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:08:12.487438   31648 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:08:12.487491   31648 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:08:12.487581   31648 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:08:12.487593   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:08:12.487688   31648 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts -> hosts in /etc/test/nested/copy/12212
	I1003 18:08:12.487697   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts -> /etc/test/nested/copy/12212/hosts
	I1003 18:08:12.487740   31648 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/12212
	I1003 18:08:12.495127   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:08:12.511597   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts --> /etc/test/nested/copy/12212/hosts (40 bytes)
	I1003 18:08:12.528571   31648 start.go:296] duration metric: took 162.180752ms for postStartSetup
	I1003 18:08:12.528647   31648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:08:12.528710   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.546258   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.643641   31648 command_runner.go:130] > 39%
	I1003 18:08:12.643858   31648 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:08:12.648017   31648 command_runner.go:130] > 179G
	I1003 18:08:12.648284   31648 fix.go:56] duration metric: took 1.405183874s for fixHost
	I1003 18:08:12.648303   31648 start.go:83] releasing machines lock for "functional-889240", held for 1.405223544s
	I1003 18:08:12.648364   31648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-889240
	I1003 18:08:12.665548   31648 ssh_runner.go:195] Run: cat /version.json
	I1003 18:08:12.665589   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.665627   31648 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:08:12.665684   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.683771   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.684037   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.833728   31648 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1003 18:08:12.833784   31648 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1003 18:08:12.833903   31648 ssh_runner.go:195] Run: systemctl --version
	I1003 18:08:12.840008   31648 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1003 18:08:12.840056   31648 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1003 18:08:12.840282   31648 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:08:12.874135   31648 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1003 18:08:12.878285   31648 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1003 18:08:12.878575   31648 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:08:12.878637   31648 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:08:12.886227   31648 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
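	# The find/mv above renames any bridge/podman CNI configs to
	# *.mk_disabled so only the CNI minikube manages (kindnet here, per
	# cni.go:143 above) stays active. A hypothetical way to list what
	# remains on the node:
	#   docker exec functional-889240 ls -la /etc/cni/net.d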
	I1003 18:08:12.886250   31648 start.go:495] detecting cgroup driver to use...
	I1003 18:08:12.886282   31648 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:08:12.886327   31648 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:08:12.900106   31648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:08:12.911429   31648 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:08:12.911477   31648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:08:12.925289   31648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:08:12.936739   31648 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:08:13.020667   31648 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:08:13.102263   31648 docker.go:234] disabling docker service ...
	I1003 18:08:13.102328   31648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:08:13.115759   31648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:08:13.127581   31648 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:08:13.208801   31648 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:08:13.298232   31648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:08:13.314511   31648 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:08:13.327949   31648 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
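	# crictl resolves its runtime endpoint from /etc/crictl.yaml; the
	# write above is equivalent to letting crictl edit its own config
	# (sketch):
	#   sudo crictl config --set runtime-endpoint=unix:///var/run/crio/crio.sock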
	I1003 18:08:13.328859   31648 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:08:13.328914   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.337658   31648 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:08:13.337709   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.346162   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.354712   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.363098   31648 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:08:13.370793   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.378940   31648 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.386700   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
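	# Approximate net effect of the sed edits above on
	# /etc/crio/crio.conf.d/02-crio.conf, reconstructed for illustration
	# (not captured from this run):
	#   [crio.image]
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   [crio.runtime]
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]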
	I1003 18:08:13.394938   31648 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:08:13.401467   31648 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1003 18:08:13.402164   31648 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:08:13.409040   31648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:08:13.496423   31648 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 18:08:13.599891   31648 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:08:13.599956   31648 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:08:13.603739   31648 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1003 18:08:13.603760   31648 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1003 18:08:13.603769   31648 command_runner.go:130] > Device: 0,59	Inode: 3868        Links: 1
	I1003 18:08:13.603779   31648 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1003 18:08:13.603787   31648 command_runner.go:130] > Access: 2025-10-03 18:08:13.582699245 +0000
	I1003 18:08:13.603796   31648 command_runner.go:130] > Modify: 2025-10-03 18:08:13.582699245 +0000
	I1003 18:08:13.603806   31648 command_runner.go:130] > Change: 2025-10-03 18:08:13.582699245 +0000
	I1003 18:08:13.603811   31648 command_runner.go:130] >  Birth: 2025-10-03 18:08:13.582699245 +0000
	I1003 18:08:13.603837   31648 start.go:563] Will wait 60s for crictl version
	I1003 18:08:13.603884   31648 ssh_runner.go:195] Run: which crictl
	I1003 18:08:13.607403   31648 command_runner.go:130] > /usr/local/bin/crictl
	I1003 18:08:13.607458   31648 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:08:13.630641   31648 command_runner.go:130] > Version:  0.1.0
	I1003 18:08:13.630667   31648 command_runner.go:130] > RuntimeName:  cri-o
	I1003 18:08:13.630673   31648 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1003 18:08:13.630680   31648 command_runner.go:130] > RuntimeApiVersion:  v1
	I1003 18:08:13.630699   31648 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:08:13.630764   31648 ssh_runner.go:195] Run: crio --version
	I1003 18:08:13.656303   31648 command_runner.go:130] > crio version 1.34.1
	I1003 18:08:13.656324   31648 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1003 18:08:13.656329   31648 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1003 18:08:13.656339   31648 command_runner.go:130] >    GitTreeState:   dirty
	I1003 18:08:13.656344   31648 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1003 18:08:13.656348   31648 command_runner.go:130] >    GoVersion:      go1.24.6
	I1003 18:08:13.656352   31648 command_runner.go:130] >    Compiler:       gc
	I1003 18:08:13.656365   31648 command_runner.go:130] >    Platform:       linux/amd64
	I1003 18:08:13.656372   31648 command_runner.go:130] >    Linkmode:       static
	I1003 18:08:13.656378   31648 command_runner.go:130] >    BuildTags:
	I1003 18:08:13.656383   31648 command_runner.go:130] >      static
	I1003 18:08:13.656387   31648 command_runner.go:130] >      netgo
	I1003 18:08:13.656393   31648 command_runner.go:130] >      osusergo
	I1003 18:08:13.656396   31648 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1003 18:08:13.656402   31648 command_runner.go:130] >      seccomp
	I1003 18:08:13.656405   31648 command_runner.go:130] >      apparmor
	I1003 18:08:13.656410   31648 command_runner.go:130] >      selinux
	I1003 18:08:13.656415   31648 command_runner.go:130] >    LDFlags:          unknown
	I1003 18:08:13.656421   31648 command_runner.go:130] >    SeccompEnabled:   true
	I1003 18:08:13.656426   31648 command_runner.go:130] >    AppArmorEnabled:  false
	I1003 18:08:13.657588   31648 ssh_runner.go:195] Run: crio --version
	I1003 18:08:13.682656   31648 command_runner.go:130] > crio version 1.34.1
	I1003 18:08:13.682693   31648 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1003 18:08:13.682698   31648 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1003 18:08:13.682703   31648 command_runner.go:130] >    GitTreeState:   dirty
	I1003 18:08:13.682708   31648 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1003 18:08:13.682712   31648 command_runner.go:130] >    GoVersion:      go1.24.6
	I1003 18:08:13.682716   31648 command_runner.go:130] >    Compiler:       gc
	I1003 18:08:13.682720   31648 command_runner.go:130] >    Platform:       linux/amd64
	I1003 18:08:13.682724   31648 command_runner.go:130] >    Linkmode:       static
	I1003 18:08:13.682728   31648 command_runner.go:130] >    BuildTags:
	I1003 18:08:13.682733   31648 command_runner.go:130] >      static
	I1003 18:08:13.682737   31648 command_runner.go:130] >      netgo
	I1003 18:08:13.682741   31648 command_runner.go:130] >      osusergo
	I1003 18:08:13.682746   31648 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1003 18:08:13.682753   31648 command_runner.go:130] >      seccomp
	I1003 18:08:13.682756   31648 command_runner.go:130] >      apparmor
	I1003 18:08:13.682759   31648 command_runner.go:130] >      selinux
	I1003 18:08:13.682763   31648 command_runner.go:130] >    LDFlags:          unknown
	I1003 18:08:13.682770   31648 command_runner.go:130] >    SeccompEnabled:   true
	I1003 18:08:13.682774   31648 command_runner.go:130] >    AppArmorEnabled:  false
	I1003 18:08:13.685817   31648 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:08:13.686852   31648 cli_runner.go:164] Run: docker network inspect functional-889240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:08:13.703291   31648 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:08:13.707207   31648 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1003 18:08:13.707295   31648 kubeadm.go:883] updating cluster {Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:08:13.707417   31648 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:08:13.707473   31648 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:08:13.737725   31648 command_runner.go:130] > {
	I1003 18:08:13.737745   31648 command_runner.go:130] >   "images":  [
	I1003 18:08:13.737749   31648 command_runner.go:130] >     {
	I1003 18:08:13.737755   31648 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1003 18:08:13.737763   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.737773   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1003 18:08:13.737780   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737786   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.737798   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1003 18:08:13.737807   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1003 18:08:13.737811   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737815   31648 command_runner.go:130] >       "size":  "109379124",
	I1003 18:08:13.737819   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.737828   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.737832   31648 command_runner.go:130] >     },
	I1003 18:08:13.737835   31648 command_runner.go:130] >     {
	I1003 18:08:13.737841   31648 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1003 18:08:13.737848   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.737859   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1003 18:08:13.737868   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737875   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.737886   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1003 18:08:13.737898   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1003 18:08:13.737904   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737908   31648 command_runner.go:130] >       "size":  "31470524",
	I1003 18:08:13.737914   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.737920   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.737931   31648 command_runner.go:130] >     },
	I1003 18:08:13.737939   31648 command_runner.go:130] >     {
	I1003 18:08:13.737948   31648 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1003 18:08:13.737958   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.737969   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1003 18:08:13.737987   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737995   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738007   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1003 18:08:13.738023   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1003 18:08:13.738031   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738037   31648 command_runner.go:130] >       "size":  "76103547",
	I1003 18:08:13.738045   31648 command_runner.go:130] >       "username":  "nonroot",
	I1003 18:08:13.738049   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738054   31648 command_runner.go:130] >     },
	I1003 18:08:13.738058   31648 command_runner.go:130] >     {
	I1003 18:08:13.738070   31648 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1003 18:08:13.738081   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738091   31648 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1003 18:08:13.738100   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738110   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738124   31648 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1003 18:08:13.738137   31648 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1003 18:08:13.738143   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738148   31648 command_runner.go:130] >       "size":  "195976448",
	I1003 18:08:13.738155   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738165   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.738175   31648 command_runner.go:130] >       },
	I1003 18:08:13.738187   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738197   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738205   31648 command_runner.go:130] >     },
	I1003 18:08:13.738212   31648 command_runner.go:130] >     {
	I1003 18:08:13.738223   31648 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1003 18:08:13.738230   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738236   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1003 18:08:13.738245   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738256   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738270   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1003 18:08:13.738285   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1003 18:08:13.738293   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738301   31648 command_runner.go:130] >       "size":  "89046001",
	I1003 18:08:13.738308   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738312   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.738315   31648 command_runner.go:130] >       },
	I1003 18:08:13.738320   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738329   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738338   31648 command_runner.go:130] >     },
	I1003 18:08:13.738344   31648 command_runner.go:130] >     {
	I1003 18:08:13.738357   31648 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1003 18:08:13.738366   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738377   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1003 18:08:13.738386   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738395   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738402   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1003 18:08:13.738418   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1003 18:08:13.738427   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738434   31648 command_runner.go:130] >       "size":  "76004181",
	I1003 18:08:13.738443   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738453   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.738460   31648 command_runner.go:130] >       },
	I1003 18:08:13.738467   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738475   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738480   31648 command_runner.go:130] >     },
	I1003 18:08:13.738484   31648 command_runner.go:130] >     {
	I1003 18:08:13.738493   31648 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1003 18:08:13.738502   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738514   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1003 18:08:13.738522   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738531   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738545   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1003 18:08:13.738560   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1003 18:08:13.738568   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738572   31648 command_runner.go:130] >       "size":  "73138073",
	I1003 18:08:13.738580   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738586   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738595   31648 command_runner.go:130] >     },
	I1003 18:08:13.738605   31648 command_runner.go:130] >     {
	I1003 18:08:13.738617   31648 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1003 18:08:13.738625   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738634   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1003 18:08:13.738642   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738648   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738658   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1003 18:08:13.738674   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1003 18:08:13.738683   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738693   31648 command_runner.go:130] >       "size":  "53844823",
	I1003 18:08:13.738702   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738710   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.738718   31648 command_runner.go:130] >       },
	I1003 18:08:13.738724   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738733   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738743   31648 command_runner.go:130] >     },
	I1003 18:08:13.738747   31648 command_runner.go:130] >     {
	I1003 18:08:13.738756   31648 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1003 18:08:13.738766   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738777   31648 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1003 18:08:13.738785   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738792   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738806   31648 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1003 18:08:13.738819   31648 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1003 18:08:13.738827   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738832   31648 command_runner.go:130] >       "size":  "742092",
	I1003 18:08:13.738838   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738843   31648 command_runner.go:130] >         "value":  "65535"
	I1003 18:08:13.738851   31648 command_runner.go:130] >       },
	I1003 18:08:13.738862   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738871   31648 command_runner.go:130] >       "pinned":  true
	I1003 18:08:13.738885   31648 command_runner.go:130] >     }
	I1003 18:08:13.738890   31648 command_runner.go:130] >   ]
	I1003 18:08:13.738898   31648 command_runner.go:130] > }
	I1003 18:08:13.739109   31648 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:08:13.739126   31648 crio.go:433] Images already preloaded, skipping extraction
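The skip decision above comes from comparing the 'crictl images --output json' listing against the set of images the preload tarball is expected to provide; only when every expected tag is already present does minikube avoid re-extracting the tarball. A minimal sketch of that check follows, assuming hypothetical type and function names (this is not minikube's actual code, only the JSON shape visible in the listing above):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the JSON shape printed by `crictl images --output json`
// in the log above: a top-level "images" array with id and repoTags fields.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// allImagesPresent is a hypothetical helper: it returns true only if every
// expected tag already shows up in the runtime's image store.
func allImagesPresent(expected []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range expected {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := allImagesPresent([]string{"registry.k8s.io/kube-apiserver:v1.34.1"})
	fmt.Println(ok, err)
}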
	I1003 18:08:13.739173   31648 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:08:13.761526   31648 command_runner.go:130] > {
	I1003 18:08:13.761550   31648 command_runner.go:130] >   "images":  [
	I1003 18:08:13.761558   31648 command_runner.go:130] >     {
	I1003 18:08:13.761569   31648 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1003 18:08:13.761577   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.761586   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1003 18:08:13.761592   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761599   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.761616   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1003 18:08:13.761631   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1003 18:08:13.761639   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761646   31648 command_runner.go:130] >       "size":  "109379124",
	I1003 18:08:13.761659   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.761672   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.761681   31648 command_runner.go:130] >     },
	I1003 18:08:13.761686   31648 command_runner.go:130] >     {
	I1003 18:08:13.761698   31648 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1003 18:08:13.761708   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.761719   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1003 18:08:13.761728   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761737   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.761753   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1003 18:08:13.761770   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1003 18:08:13.761779   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761789   31648 command_runner.go:130] >       "size":  "31470524",
	I1003 18:08:13.761799   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.761810   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.761818   31648 command_runner.go:130] >     },
	I1003 18:08:13.761823   31648 command_runner.go:130] >     {
	I1003 18:08:13.761836   31648 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1003 18:08:13.761845   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.761852   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1003 18:08:13.761860   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761866   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.761879   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1003 18:08:13.761889   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1003 18:08:13.761897   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761903   31648 command_runner.go:130] >       "size":  "76103547",
	I1003 18:08:13.761913   31648 command_runner.go:130] >       "username":  "nonroot",
	I1003 18:08:13.761922   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.761934   31648 command_runner.go:130] >     },
	I1003 18:08:13.761942   31648 command_runner.go:130] >     {
	I1003 18:08:13.761952   31648 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1003 18:08:13.761960   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.761970   31648 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1003 18:08:13.762000   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762008   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762019   31648 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1003 18:08:13.762032   31648 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1003 18:08:13.762041   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762051   31648 command_runner.go:130] >       "size":  "195976448",
	I1003 18:08:13.762060   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762068   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.762074   31648 command_runner.go:130] >       },
	I1003 18:08:13.762087   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762097   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762101   31648 command_runner.go:130] >     },
	I1003 18:08:13.762109   31648 command_runner.go:130] >     {
	I1003 18:08:13.762117   31648 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1003 18:08:13.762126   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762135   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1003 18:08:13.762143   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762149   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762163   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1003 18:08:13.762178   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1003 18:08:13.762186   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762193   31648 command_runner.go:130] >       "size":  "89046001",
	I1003 18:08:13.762202   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762212   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.762221   31648 command_runner.go:130] >       },
	I1003 18:08:13.762229   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762239   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762248   31648 command_runner.go:130] >     },
	I1003 18:08:13.762256   31648 command_runner.go:130] >     {
	I1003 18:08:13.762265   31648 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1003 18:08:13.762275   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762284   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1003 18:08:13.762292   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762303   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762319   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1003 18:08:13.762335   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1003 18:08:13.762343   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762353   31648 command_runner.go:130] >       "size":  "76004181",
	I1003 18:08:13.762361   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762367   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.762374   31648 command_runner.go:130] >       },
	I1003 18:08:13.762380   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762388   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762392   31648 command_runner.go:130] >     },
	I1003 18:08:13.762401   31648 command_runner.go:130] >     {
	I1003 18:08:13.762412   31648 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1003 18:08:13.762422   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762431   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1003 18:08:13.762438   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762444   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762456   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1003 18:08:13.762468   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1003 18:08:13.762477   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762487   31648 command_runner.go:130] >       "size":  "73138073",
	I1003 18:08:13.762497   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762506   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762515   31648 command_runner.go:130] >     },
	I1003 18:08:13.762523   31648 command_runner.go:130] >     {
	I1003 18:08:13.762533   31648 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1003 18:08:13.762539   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762547   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1003 18:08:13.762552   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762559   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762570   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1003 18:08:13.762593   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1003 18:08:13.762602   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762608   31648 command_runner.go:130] >       "size":  "53844823",
	I1003 18:08:13.762616   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762623   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.762630   31648 command_runner.go:130] >       },
	I1003 18:08:13.762636   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762645   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762653   31648 command_runner.go:130] >     },
	I1003 18:08:13.762657   31648 command_runner.go:130] >     {
	I1003 18:08:13.762665   31648 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1003 18:08:13.762671   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762681   31648 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1003 18:08:13.762686   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762695   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762706   31648 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1003 18:08:13.762720   31648 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1003 18:08:13.762728   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762732   31648 command_runner.go:130] >       "size":  "742092",
	I1003 18:08:13.762737   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762742   31648 command_runner.go:130] >         "value":  "65535"
	I1003 18:08:13.762747   31648 command_runner.go:130] >       },
	I1003 18:08:13.762751   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762757   31648 command_runner.go:130] >       "pinned":  true
	I1003 18:08:13.762761   31648 command_runner.go:130] >     }
	I1003 18:08:13.762766   31648 command_runner.go:130] >   ]
	I1003 18:08:13.762769   31648 command_runner.go:130] > }
	I1003 18:08:13.763568   31648 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:08:13.763587   31648 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:08:13.763596   31648 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1003 18:08:13.763703   31648 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-889240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
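The unit fragment above is installed as a systemd drop-in for the kubelet: the empty ExecStart= line first clears the command inherited from the base unit, and the second ExecStart= sets minikube's own invocation. A minimal sketch of writing such a drop-in, assuming a hypothetical helper and the conventional kubelet.service.d path (a real setup would also need a systemd daemon-reload afterwards):

package main

import (
	"fmt"
	"os"
)

// writeKubeletDropIn renders the override shown in the log above. The blank
// ExecStart= resets the base unit's command before the new one takes effect.
func writeKubeletDropIn(hostname, nodeIP string) error {
	unit := fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --hostname-override=%s --node-ip=%s

[Install]
`, hostname, nodeIP)
	// Conventional drop-in location; the file name here is illustrative.
	return os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(unit), 0o644)
}

func main() {
	if err := writeKubeletDropIn("functional-889240", "192.168.49.2"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}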
	I1003 18:08:13.763779   31648 ssh_runner.go:195] Run: crio config
	I1003 18:08:13.802487   31648 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1003 18:08:13.802512   31648 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1003 18:08:13.802523   31648 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1003 18:08:13.802528   31648 command_runner.go:130] > #
	I1003 18:08:13.802538   31648 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1003 18:08:13.802546   31648 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1003 18:08:13.802555   31648 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1003 18:08:13.802566   31648 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1003 18:08:13.802572   31648 command_runner.go:130] > # reload'.
	I1003 18:08:13.802583   31648 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1003 18:08:13.802595   31648 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1003 18:08:13.802606   31648 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1003 18:08:13.802615   31648 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1003 18:08:13.802622   31648 command_runner.go:130] > [crio]
	I1003 18:08:13.802632   31648 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1003 18:08:13.802640   31648 command_runner.go:130] > # containers images, in this directory.
	I1003 18:08:13.802653   31648 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1003 18:08:13.802671   31648 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1003 18:08:13.802680   31648 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1003 18:08:13.802693   31648 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1003 18:08:13.802704   31648 command_runner.go:130] > # imagestore = ""
	I1003 18:08:13.802714   31648 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1003 18:08:13.802726   31648 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1003 18:08:13.802736   31648 command_runner.go:130] > # storage_driver = "overlay"
	I1003 18:08:13.802747   31648 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1003 18:08:13.802761   31648 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1003 18:08:13.802770   31648 command_runner.go:130] > # storage_option = [
	I1003 18:08:13.802777   31648 command_runner.go:130] > # ]
	I1003 18:08:13.802788   31648 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1003 18:08:13.802800   31648 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1003 18:08:13.802808   31648 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1003 18:08:13.802820   31648 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1003 18:08:13.802830   31648 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1003 18:08:13.802835   31648 command_runner.go:130] > # always happen on a node reboot
	I1003 18:08:13.802840   31648 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1003 18:08:13.802849   31648 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1003 18:08:13.802860   31648 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1003 18:08:13.802865   31648 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1003 18:08:13.802871   31648 command_runner.go:130] > # version_file_persist = ""
	I1003 18:08:13.802882   31648 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1003 18:08:13.802899   31648 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1003 18:08:13.802906   31648 command_runner.go:130] > # internal_wipe = true
	I1003 18:08:13.802917   31648 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1003 18:08:13.802929   31648 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1003 18:08:13.802935   31648 command_runner.go:130] > # internal_repair = true
	I1003 18:08:13.802943   31648 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1003 18:08:13.802953   31648 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1003 18:08:13.802966   31648 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1003 18:08:13.802985   31648 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1003 18:08:13.802996   31648 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1003 18:08:13.803006   31648 command_runner.go:130] > [crio.api]
	I1003 18:08:13.803015   31648 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1003 18:08:13.803025   31648 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1003 18:08:13.803033   31648 command_runner.go:130] > # IP address on which the stream server will listen.
	I1003 18:08:13.803043   31648 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1003 18:08:13.803054   31648 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1003 18:08:13.803065   31648 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1003 18:08:13.803072   31648 command_runner.go:130] > # stream_port = "0"
	I1003 18:08:13.803083   31648 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1003 18:08:13.803090   31648 command_runner.go:130] > # stream_enable_tls = false
	I1003 18:08:13.803102   31648 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1003 18:08:13.803114   31648 command_runner.go:130] > # stream_idle_timeout = ""
	I1003 18:08:13.803124   31648 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1003 18:08:13.803136   31648 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1003 18:08:13.803146   31648 command_runner.go:130] > # stream_tls_cert = ""
	I1003 18:08:13.803156   31648 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1003 18:08:13.803166   31648 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1003 18:08:13.803175   31648 command_runner.go:130] > # stream_tls_key = ""
	I1003 18:08:13.803185   31648 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1003 18:08:13.803197   31648 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1003 18:08:13.803202   31648 command_runner.go:130] > # automatically pick up the changes.
	I1003 18:08:13.803207   31648 command_runner.go:130] > # stream_tls_ca = ""
	I1003 18:08:13.803271   31648 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1003 18:08:13.803286   31648 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1003 18:08:13.803296   31648 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1003 18:08:13.803308   31648 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1003 18:08:13.803318   31648 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1003 18:08:13.803331   31648 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1003 18:08:13.803338   31648 command_runner.go:130] > [crio.runtime]
	I1003 18:08:13.803350   31648 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1003 18:08:13.803358   31648 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1003 18:08:13.803367   31648 command_runner.go:130] > # "nofile=1024:2048"
	I1003 18:08:13.803378   31648 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1003 18:08:13.803388   31648 command_runner.go:130] > # default_ulimits = [
	I1003 18:08:13.803393   31648 command_runner.go:130] > # ]
	I1003 18:08:13.803403   31648 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1003 18:08:13.803409   31648 command_runner.go:130] > # no_pivot = false
	I1003 18:08:13.803422   31648 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1003 18:08:13.803432   31648 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1003 18:08:13.803444   31648 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1003 18:08:13.803455   31648 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1003 18:08:13.803462   31648 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1003 18:08:13.803473   31648 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1003 18:08:13.803482   31648 command_runner.go:130] > # conmon = ""
	I1003 18:08:13.803489   31648 command_runner.go:130] > # Cgroup setting for conmon
	I1003 18:08:13.803504   31648 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1003 18:08:13.803513   31648 command_runner.go:130] > conmon_cgroup = "pod"
	I1003 18:08:13.803523   31648 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1003 18:08:13.803534   31648 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1003 18:08:13.803545   31648 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1003 18:08:13.803554   31648 command_runner.go:130] > # conmon_env = [
	I1003 18:08:13.803560   31648 command_runner.go:130] > # ]
	I1003 18:08:13.803573   31648 command_runner.go:130] > # Additional environment variables to set for all the
	I1003 18:08:13.803583   31648 command_runner.go:130] > # containers. These are overridden if set in the
	I1003 18:08:13.803595   31648 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1003 18:08:13.803603   31648 command_runner.go:130] > # default_env = [
	I1003 18:08:13.803611   31648 command_runner.go:130] > # ]
	I1003 18:08:13.803620   31648 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1003 18:08:13.803635   31648 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1003 18:08:13.803644   31648 command_runner.go:130] > # selinux = false
	I1003 18:08:13.803657   31648 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1003 18:08:13.803681   31648 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1003 18:08:13.803693   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.803703   31648 command_runner.go:130] > # seccomp_profile = ""
	I1003 18:08:13.803714   31648 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1003 18:08:13.803725   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.803735   31648 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1003 18:08:13.803746   31648 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1003 18:08:13.803760   31648 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1003 18:08:13.803772   31648 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1003 18:08:13.803785   31648 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I1003 18:08:13.803796   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.803803   31648 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1003 18:08:13.803817   31648 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1003 18:08:13.803827   31648 command_runner.go:130] > # the cgroup blockio controller.
	I1003 18:08:13.803833   31648 command_runner.go:130] > # blockio_config_file = ""
	I1003 18:08:13.803847   31648 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1003 18:08:13.803856   31648 command_runner.go:130] > # blockio parameters.
	I1003 18:08:13.803862   31648 command_runner.go:130] > # blockio_reload = false
	I1003 18:08:13.803869   31648 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1003 18:08:13.803877   31648 command_runner.go:130] > # irqbalance daemon.
	I1003 18:08:13.803883   31648 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1003 18:08:13.803890   31648 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1003 18:08:13.803906   31648 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1003 18:08:13.803916   31648 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1003 18:08:13.803925   31648 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1003 18:08:13.803933   31648 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1003 18:08:13.803939   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.803951   31648 command_runner.go:130] > # rdt_config_file = ""
	I1003 18:08:13.803958   31648 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1003 18:08:13.803970   31648 command_runner.go:130] > # cgroup_manager = "systemd"
	I1003 18:08:13.803987   31648 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1003 18:08:13.803998   31648 command_runner.go:130] > # separate_pull_cgroup = ""
	I1003 18:08:13.804008   31648 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1003 18:08:13.804017   31648 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1003 18:08:13.804026   31648 command_runner.go:130] > # will be added.
	I1003 18:08:13.804035   31648 command_runner.go:130] > # default_capabilities = [
	I1003 18:08:13.804043   31648 command_runner.go:130] > # 	"CHOWN",
	I1003 18:08:13.804050   31648 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1003 18:08:13.804055   31648 command_runner.go:130] > # 	"FSETID",
	I1003 18:08:13.804066   31648 command_runner.go:130] > # 	"FOWNER",
	I1003 18:08:13.804071   31648 command_runner.go:130] > # 	"SETGID",
	I1003 18:08:13.804087   31648 command_runner.go:130] > # 	"SETUID",
	I1003 18:08:13.804093   31648 command_runner.go:130] > # 	"SETPCAP",
	I1003 18:08:13.804097   31648 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1003 18:08:13.804102   31648 command_runner.go:130] > # 	"KILL",
	I1003 18:08:13.804105   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804112   31648 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1003 18:08:13.804121   31648 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1003 18:08:13.804125   31648 command_runner.go:130] > # add_inheritable_capabilities = false
	I1003 18:08:13.804133   31648 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1003 18:08:13.804138   31648 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1003 18:08:13.804143   31648 command_runner.go:130] > default_sysctls = [
	I1003 18:08:13.804147   31648 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1003 18:08:13.804150   31648 command_runner.go:130] > ]
	I1003 18:08:13.804157   31648 command_runner.go:130] > # List of devices on the host that a
	I1003 18:08:13.804163   31648 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1003 18:08:13.804169   31648 command_runner.go:130] > # allowed_devices = [
	I1003 18:08:13.804173   31648 command_runner.go:130] > # 	"/dev/fuse",
	I1003 18:08:13.804178   31648 command_runner.go:130] > # 	"/dev/net/tun",
	I1003 18:08:13.804181   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804188   31648 command_runner.go:130] > # List of additional devices, specified as
	I1003 18:08:13.804194   31648 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1003 18:08:13.804201   31648 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1003 18:08:13.804207   31648 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1003 18:08:13.804212   31648 command_runner.go:130] > # additional_devices = [
	I1003 18:08:13.804215   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804222   31648 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1003 18:08:13.804226   31648 command_runner.go:130] > # cdi_spec_dirs = [
	I1003 18:08:13.804231   31648 command_runner.go:130] > # 	"/etc/cdi",
	I1003 18:08:13.804235   31648 command_runner.go:130] > # 	"/var/run/cdi",
	I1003 18:08:13.804237   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804243   31648 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1003 18:08:13.804251   31648 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1003 18:08:13.804254   31648 command_runner.go:130] > # Defaults to false.
	I1003 18:08:13.804261   31648 command_runner.go:130] > # device_ownership_from_security_context = false
	I1003 18:08:13.804268   31648 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1003 18:08:13.804275   31648 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1003 18:08:13.804279   31648 command_runner.go:130] > # hooks_dir = [
	I1003 18:08:13.804286   31648 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1003 18:08:13.804290   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804297   31648 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1003 18:08:13.804303   31648 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1003 18:08:13.804309   31648 command_runner.go:130] > # its default mounts from the following two files:
	I1003 18:08:13.804312   31648 command_runner.go:130] > #
	I1003 18:08:13.804320   31648 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1003 18:08:13.804326   31648 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1003 18:08:13.804333   31648 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1003 18:08:13.804336   31648 command_runner.go:130] > #
	I1003 18:08:13.804342   31648 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1003 18:08:13.804349   31648 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1003 18:08:13.804356   31648 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1003 18:08:13.804363   31648 command_runner.go:130] > #      only add mounts it finds in this file.
	I1003 18:08:13.804366   31648 command_runner.go:130] > #
	I1003 18:08:13.804372   31648 command_runner.go:130] > # default_mounts_file = ""
	I1003 18:08:13.804376   31648 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1003 18:08:13.804384   31648 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1003 18:08:13.804388   31648 command_runner.go:130] > # pids_limit = -1
	I1003 18:08:13.804396   31648 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1003 18:08:13.804401   31648 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1003 18:08:13.804409   31648 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1003 18:08:13.804417   31648 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1003 18:08:13.804422   31648 command_runner.go:130] > # log_size_max = -1
	I1003 18:08:13.804429   31648 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1003 18:08:13.804435   31648 command_runner.go:130] > # log_to_journald = false
	I1003 18:08:13.804441   31648 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1003 18:08:13.804447   31648 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1003 18:08:13.804451   31648 command_runner.go:130] > # Path to directory for container attach sockets.
	I1003 18:08:13.804458   31648 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1003 18:08:13.804463   31648 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1003 18:08:13.804469   31648 command_runner.go:130] > # bind_mount_prefix = ""
	I1003 18:08:13.804473   31648 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1003 18:08:13.804479   31648 command_runner.go:130] > # read_only = false
	I1003 18:08:13.804486   31648 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1003 18:08:13.804494   31648 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1003 18:08:13.804497   31648 command_runner.go:130] > # live configuration reload.
	I1003 18:08:13.804501   31648 command_runner.go:130] > # log_level = "info"
	I1003 18:08:13.804508   31648 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1003 18:08:13.804513   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.804519   31648 command_runner.go:130] > # log_filter = ""
	I1003 18:08:13.804524   31648 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1003 18:08:13.804532   31648 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1003 18:08:13.804535   31648 command_runner.go:130] > # separated by comma.
	I1003 18:08:13.804544   31648 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1003 18:08:13.804551   31648 command_runner.go:130] > # uid_mappings = ""
	I1003 18:08:13.804557   31648 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1003 18:08:13.804564   31648 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1003 18:08:13.804569   31648 command_runner.go:130] > # separated by comma.
	I1003 18:08:13.804578   31648 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1003 18:08:13.804582   31648 command_runner.go:130] > # gid_mappings = ""
	I1003 18:08:13.804589   31648 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1003 18:08:13.804595   31648 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1003 18:08:13.804603   31648 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1003 18:08:13.804612   31648 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1003 18:08:13.804618   31648 command_runner.go:130] > # minimum_mappable_uid = -1
	I1003 18:08:13.804624   31648 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1003 18:08:13.804631   31648 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1003 18:08:13.804636   31648 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1003 18:08:13.804645   31648 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1003 18:08:13.804651   31648 command_runner.go:130] > # minimum_mappable_gid = -1
	I1003 18:08:13.804657   31648 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1003 18:08:13.804669   31648 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1003 18:08:13.804674   31648 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1003 18:08:13.804680   31648 command_runner.go:130] > # ctr_stop_timeout = 30
	I1003 18:08:13.804685   31648 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1003 18:08:13.804693   31648 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1003 18:08:13.804697   31648 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1003 18:08:13.804703   31648 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1003 18:08:13.804707   31648 command_runner.go:130] > # drop_infra_ctr = true
	I1003 18:08:13.804715   31648 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1003 18:08:13.804720   31648 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1003 18:08:13.804728   31648 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1003 18:08:13.804735   31648 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1003 18:08:13.804742   31648 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1003 18:08:13.804749   31648 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1003 18:08:13.804754   31648 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1003 18:08:13.804761   31648 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1003 18:08:13.804765   31648 command_runner.go:130] > # shared_cpuset = ""
	I1003 18:08:13.804773   31648 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1003 18:08:13.804777   31648 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1003 18:08:13.804783   31648 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1003 18:08:13.804789   31648 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1003 18:08:13.804795   31648 command_runner.go:130] > # pinns_path = ""
	I1003 18:08:13.804800   31648 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1003 18:08:13.804808   31648 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1003 18:08:13.804813   31648 command_runner.go:130] > # enable_criu_support = true
	I1003 18:08:13.804819   31648 command_runner.go:130] > # Enable/disable the generation of the container,
	I1003 18:08:13.804825   31648 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1003 18:08:13.804832   31648 command_runner.go:130] > # enable_pod_events = false
	I1003 18:08:13.804837   31648 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1003 18:08:13.804844   31648 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1003 18:08:13.804848   31648 command_runner.go:130] > # default_runtime = "crun"
	I1003 18:08:13.804855   31648 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1003 18:08:13.804862   31648 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as a directory).
	I1003 18:08:13.804874   31648 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1003 18:08:13.804881   31648 command_runner.go:130] > # creation as a file is not desired either.
	I1003 18:08:13.804889   31648 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1003 18:08:13.804896   31648 command_runner.go:130] > # the hostname is being managed dynamically.
	I1003 18:08:13.804900   31648 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1003 18:08:13.804905   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804912   31648 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1003 18:08:13.804920   31648 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1003 18:08:13.804926   31648 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1003 18:08:13.804931   31648 command_runner.go:130] > # Each entry in the table should follow the format:
	I1003 18:08:13.804934   31648 command_runner.go:130] > #
	I1003 18:08:13.804941   31648 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1003 18:08:13.804945   31648 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1003 18:08:13.804952   31648 command_runner.go:130] > # runtime_type = "oci"
	I1003 18:08:13.804956   31648 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1003 18:08:13.804963   31648 command_runner.go:130] > # inherit_default_runtime = false
	I1003 18:08:13.804968   31648 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1003 18:08:13.804988   31648 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1003 18:08:13.804996   31648 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1003 18:08:13.805005   31648 command_runner.go:130] > # monitor_env = []
	I1003 18:08:13.805011   31648 command_runner.go:130] > # privileged_without_host_devices = false
	I1003 18:08:13.805017   31648 command_runner.go:130] > # allowed_annotations = []
	I1003 18:08:13.805022   31648 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1003 18:08:13.805028   31648 command_runner.go:130] > # no_sync_log = false
	I1003 18:08:13.805032   31648 command_runner.go:130] > # default_annotations = {}
	I1003 18:08:13.805038   31648 command_runner.go:130] > # stream_websockets = false
	I1003 18:08:13.805042   31648 command_runner.go:130] > # seccomp_profile = ""
	I1003 18:08:13.805062   31648 command_runner.go:130] > # Where:
	I1003 18:08:13.805069   31648 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1003 18:08:13.805075   31648 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1003 18:08:13.805081   31648 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1003 18:08:13.805089   31648 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1003 18:08:13.805092   31648 command_runner.go:130] > #   in $PATH.
	I1003 18:08:13.805100   31648 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1003 18:08:13.805105   31648 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1003 18:08:13.805112   31648 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1003 18:08:13.805115   31648 command_runner.go:130] > #   state.
	I1003 18:08:13.805121   31648 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1003 18:08:13.805128   31648 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1003 18:08:13.805133   31648 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1003 18:08:13.805141   31648 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1003 18:08:13.805146   31648 command_runner.go:130] > #   the values from the default runtime on load time.
	I1003 18:08:13.805153   31648 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1003 18:08:13.805158   31648 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1003 18:08:13.805165   31648 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1003 18:08:13.805177   31648 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1003 18:08:13.805183   31648 command_runner.go:130] > #   The currently recognized values are:
	I1003 18:08:13.805190   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1003 18:08:13.805199   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1003 18:08:13.805207   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1003 18:08:13.805214   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1003 18:08:13.805221   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1003 18:08:13.805229   31648 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1003 18:08:13.805235   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1003 18:08:13.805243   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1003 18:08:13.805251   31648 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1003 18:08:13.805257   31648 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1003 18:08:13.805265   31648 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1003 18:08:13.805273   31648 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1003 18:08:13.805278   31648 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1003 18:08:13.805285   31648 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1003 18:08:13.805291   31648 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1003 18:08:13.805300   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1003 18:08:13.805308   31648 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1003 18:08:13.805312   31648 command_runner.go:130] > #   deprecated option "conmon".
	I1003 18:08:13.805319   31648 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1003 18:08:13.805326   31648 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1003 18:08:13.805332   31648 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1003 18:08:13.805339   31648 command_runner.go:130] > #   should be moved to the container's cgroup
	I1003 18:08:13.805346   31648 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1003 18:08:13.805352   31648 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1003 18:08:13.805358   31648 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1003 18:08:13.805364   31648 command_runner.go:130] > #   conmon-rs by using:
	I1003 18:08:13.805370   31648 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1003 18:08:13.805379   31648 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1003 18:08:13.805388   31648 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1003 18:08:13.805395   31648 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1003 18:08:13.805401   31648 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1003 18:08:13.805415   31648 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1003 18:08:13.805423   31648 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1003 18:08:13.805430   31648 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1003 18:08:13.805437   31648 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1003 18:08:13.805449   31648 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1003 18:08:13.805455   31648 command_runner.go:130] > #   when a machine crash happens.
	I1003 18:08:13.805462   31648 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1003 18:08:13.805471   31648 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1003 18:08:13.805480   31648 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1003 18:08:13.805485   31648 command_runner.go:130] > #   seccomp profile for the runtime.
	I1003 18:08:13.805491   31648 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1003 18:08:13.805499   31648 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1003 18:08:13.805504   31648 command_runner.go:130] > #
	I1003 18:08:13.805508   31648 command_runner.go:130] > # Using the seccomp notifier feature:
	I1003 18:08:13.805513   31648 command_runner.go:130] > #
	I1003 18:08:13.805518   31648 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1003 18:08:13.805528   31648 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1003 18:08:13.805533   31648 command_runner.go:130] > #
	I1003 18:08:13.805539   31648 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1003 18:08:13.805547   31648 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1003 18:08:13.805549   31648 command_runner.go:130] > #
	I1003 18:08:13.805555   31648 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1003 18:08:13.805560   31648 command_runner.go:130] > # feature.
	I1003 18:08:13.805563   31648 command_runner.go:130] > #
	I1003 18:08:13.805568   31648 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1003 18:08:13.805576   31648 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1003 18:08:13.805582   31648 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1003 18:08:13.805589   31648 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1003 18:08:13.805595   31648 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1003 18:08:13.805600   31648 command_runner.go:130] > #
	I1003 18:08:13.805605   31648 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1003 18:08:13.805614   31648 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1003 18:08:13.805619   31648 command_runner.go:130] > #
	I1003 18:08:13.805625   31648 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1003 18:08:13.805632   31648 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1003 18:08:13.805635   31648 command_runner.go:130] > #
	I1003 18:08:13.805641   31648 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1003 18:08:13.805649   31648 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1003 18:08:13.805652   31648 command_runner.go:130] > # limitation.
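For reference, a minimal sketch of opting a pod into the seccomp notifier. This assumes the chosen runtime handler lists "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations (the crun handler below only allows "io.containers.trace-syscall", so a suitably configured RuntimeClass would be needed); the pod name and image are illustrative:

# Sketch: pod annotated for the seccomp notifier; restartPolicy must be
# Never, otherwise the kubelet restarts the container before CRI-O can
# report the blocked syscalls.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: notifier-demo
  annotations:
    io.kubernetes.cri-o.seccompNotifierAction: "stop"
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: registry.k8s.io/pause:3.10.1
    securityContext:
      seccompProfile:
        type: RuntimeDefault   # the notifier needs a seccomp profile to modify
EOF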
	I1003 18:08:13.805656   31648 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1003 18:08:13.805666   31648 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1003 18:08:13.805671   31648 command_runner.go:130] > runtime_type = ""
	I1003 18:08:13.805675   31648 command_runner.go:130] > runtime_root = "/run/crun"
	I1003 18:08:13.805679   31648 command_runner.go:130] > inherit_default_runtime = false
	I1003 18:08:13.805683   31648 command_runner.go:130] > runtime_config_path = ""
	I1003 18:08:13.805689   31648 command_runner.go:130] > container_min_memory = ""
	I1003 18:08:13.805694   31648 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1003 18:08:13.805700   31648 command_runner.go:130] > monitor_cgroup = "pod"
	I1003 18:08:13.805704   31648 command_runner.go:130] > monitor_exec_cgroup = ""
	I1003 18:08:13.805710   31648 command_runner.go:130] > allowed_annotations = [
	I1003 18:08:13.805714   31648 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1003 18:08:13.805718   31648 command_runner.go:130] > ]
	I1003 18:08:13.805722   31648 command_runner.go:130] > privileged_without_host_devices = false
	I1003 18:08:13.805728   31648 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1003 18:08:13.805733   31648 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1003 18:08:13.805738   31648 command_runner.go:130] > runtime_type = ""
	I1003 18:08:13.805742   31648 command_runner.go:130] > runtime_root = "/run/runc"
	I1003 18:08:13.805748   31648 command_runner.go:130] > inherit_default_runtime = false
	I1003 18:08:13.805751   31648 command_runner.go:130] > runtime_config_path = ""
	I1003 18:08:13.805758   31648 command_runner.go:130] > container_min_memory = ""
	I1003 18:08:13.805762   31648 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1003 18:08:13.805767   31648 command_runner.go:130] > monitor_cgroup = "pod"
	I1003 18:08:13.805771   31648 command_runner.go:130] > monitor_exec_cgroup = ""
	I1003 18:08:13.805778   31648 command_runner.go:130] > privileged_without_host_devices = false
	I1003 18:08:13.805784   31648 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1003 18:08:13.805790   31648 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1003 18:08:13.805796   31648 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1003 18:08:13.805805   31648 command_runner.go:130] > # Each workload has a name, an activation_annotation, an annotation_prefix, and a set of resources it supports mutating.
	I1003 18:08:13.805817   31648 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1003 18:08:13.805828   31648 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1003 18:08:13.805837   31648 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1003 18:08:13.805842   31648 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1003 18:08:13.805852   31648 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1003 18:08:13.805860   31648 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1003 18:08:13.805867   31648 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1003 18:08:13.805873   31648 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1003 18:08:13.805878   31648 command_runner.go:130] > # Example:
	I1003 18:08:13.805882   31648 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1003 18:08:13.805886   31648 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1003 18:08:13.805893   31648 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1003 18:08:13.805899   31648 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1003 18:08:13.805903   31648 command_runner.go:130] > # cpuset = "0-1"
	I1003 18:08:13.805906   31648 command_runner.go:130] > # cpushares = "5"
	I1003 18:08:13.805910   31648 command_runner.go:130] > # cpuquota = "1000"
	I1003 18:08:13.805919   31648 command_runner.go:130] > # cpuperiod = "100000"
	I1003 18:08:13.805924   31648 command_runner.go:130] > # cpulimit = "35"
	I1003 18:08:13.805933   31648 command_runner.go:130] > # Where:
	I1003 18:08:13.805940   31648 command_runner.go:130] > # The workload name is workload-type.
	I1003 18:08:13.805950   31648 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1003 18:08:13.805955   31648 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1003 18:08:13.805960   31648 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1003 18:08:13.805971   31648 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1003 18:08:13.805994   31648 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
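A minimal sketch of a pod opting into the example workload above, assuming the [crio.runtime.workloads.workload-type] stanza is actually uncommented in the running config (pod and container names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: workload-demo
  annotations:
    io.crio/workload: ""                      # activation: key only, value ignored
    io.crio.workload-type.cpushares/demo: "5" # $annotation_prefix.$resource/$ctrName
spec:
  containers:
  - name: demo
    image: registry.k8s.io/pause:3.10.1
EOF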
	I1003 18:08:13.806006   31648 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1003 18:08:13.806019   31648 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1003 18:08:13.806027   31648 command_runner.go:130] > # Default value is set to true
	I1003 18:08:13.806031   31648 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1003 18:08:13.806036   31648 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1003 18:08:13.806040   31648 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1003 18:08:13.806047   31648 command_runner.go:130] > # Default value is set to 'false'
	I1003 18:08:13.806052   31648 command_runner.go:130] > # disable_hostport_mapping = false
	I1003 18:08:13.806057   31648 command_runner.go:130] > # timezone: To set the timezone for a container in CRI-O.
	I1003 18:08:13.806066   31648 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1003 18:08:13.806074   31648 command_runner.go:130] > # timezone = ""
	I1003 18:08:13.806085   31648 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1003 18:08:13.806093   31648 command_runner.go:130] > #
	I1003 18:08:13.806105   31648 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1003 18:08:13.806116   31648 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1003 18:08:13.806122   31648 command_runner.go:130] > [crio.image]
	I1003 18:08:13.806127   31648 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1003 18:08:13.806134   31648 command_runner.go:130] > # default_transport = "docker://"
	I1003 18:08:13.806139   31648 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1003 18:08:13.806147   31648 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1003 18:08:13.806154   31648 command_runner.go:130] > # global_auth_file = ""
	I1003 18:08:13.806159   31648 command_runner.go:130] > # The image used to instantiate infra containers.
	I1003 18:08:13.806165   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.806170   31648 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1003 18:08:13.806178   31648 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1003 18:08:13.806185   31648 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1003 18:08:13.806190   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.806196   31648 command_runner.go:130] > # pause_image_auth_file = ""
	I1003 18:08:13.806202   31648 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1003 18:08:13.806209   31648 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1003 18:08:13.806215   31648 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1003 18:08:13.806220   31648 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1003 18:08:13.806226   31648 command_runner.go:130] > # pause_command = "/pause"
	I1003 18:08:13.806231   31648 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1003 18:08:13.806239   31648 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1003 18:08:13.806244   31648 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1003 18:08:13.806252   31648 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1003 18:08:13.806257   31648 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1003 18:08:13.806264   31648 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1003 18:08:13.806268   31648 command_runner.go:130] > # pinned_images = [
	I1003 18:08:13.806271   31648 command_runner.go:130] > # ]
	I1003 18:08:13.806278   31648 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1003 18:08:13.806286   31648 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1003 18:08:13.806293   31648 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1003 18:08:13.806301   31648 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1003 18:08:13.806306   31648 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1003 18:08:13.806312   31648 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1003 18:08:13.806318   31648 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1003 18:08:13.806325   31648 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1003 18:08:13.806333   31648 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1003 18:08:13.806341   31648 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1003 18:08:13.806347   31648 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1003 18:08:13.806353   31648 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
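For context, the file named by signature_policy follows containers-policy.json(5). A deliberately permissive sketch (accept everything, no signature checks; not a recommendation for real registries):

# Sketch only; see containers-policy.json(5) for signedBy /
# sigstoreSigned requirements on production registries.
sudo tee /etc/crio/policy.json <<'EOF'
{
  "default": [
    { "type": "insecureAcceptAnything" }
  ]
}
EOF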
	I1003 18:08:13.806358   31648 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1003 18:08:13.806366   31648 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1003 18:08:13.806369   31648 command_runner.go:130] > # changing them here.
	I1003 18:08:13.806374   31648 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1003 18:08:13.806380   31648 command_runner.go:130] > # insecure_registries = [
	I1003 18:08:13.806383   31648 command_runner.go:130] > # ]
	I1003 18:08:13.806391   31648 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1003 18:08:13.806398   31648 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1003 18:08:13.806404   31648 command_runner.go:130] > # image_volumes = "mkdir"
	I1003 18:08:13.806409   31648 command_runner.go:130] > # Temporary directory to use for storing big files
	I1003 18:08:13.806415   31648 command_runner.go:130] > # big_files_temporary_dir = ""
	I1003 18:08:13.806420   31648 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1003 18:08:13.806429   31648 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1003 18:08:13.806435   31648 command_runner.go:130] > # auto_reload_registries = false
	I1003 18:08:13.806441   31648 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1003 18:08:13.806450   31648 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1003 18:08:13.806467   31648 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1003 18:08:13.806473   31648 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1003 18:08:13.806477   31648 command_runner.go:130] > # The mode of short name resolution.
	I1003 18:08:13.806484   31648 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1003 18:08:13.806492   31648 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1003 18:08:13.806499   31648 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1003 18:08:13.806503   31648 command_runner.go:130] > # short_name_mode = "enforcing"
	I1003 18:08:13.806511   31648 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1003 18:08:13.806518   31648 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1003 18:08:13.806523   31648 command_runner.go:130] > # oci_artifact_mount_support = true
	I1003 18:08:13.806530   31648 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1003 18:08:13.806535   31648 command_runner.go:130] > # CNI plugins.
	I1003 18:08:13.806541   31648 command_runner.go:130] > [crio.network]
	I1003 18:08:13.806546   31648 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1003 18:08:13.806553   31648 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1003 18:08:13.806557   31648 command_runner.go:130] > # cni_default_network = ""
	I1003 18:08:13.806562   31648 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1003 18:08:13.806568   31648 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1003 18:08:13.806573   31648 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1003 18:08:13.806580   31648 command_runner.go:130] > # plugin_dirs = [
	I1003 18:08:13.806584   31648 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1003 18:08:13.806589   31648 command_runner.go:130] > # ]
	I1003 18:08:13.806593   31648 command_runner.go:130] > # List of included pod metrics.
	I1003 18:08:13.806599   31648 command_runner.go:130] > # included_pod_metrics = [
	I1003 18:08:13.806603   31648 command_runner.go:130] > # ]
	I1003 18:08:13.806610   31648 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1003 18:08:13.806614   31648 command_runner.go:130] > [crio.metrics]
	I1003 18:08:13.806618   31648 command_runner.go:130] > # Globally enable or disable metrics support.
	I1003 18:08:13.806624   31648 command_runner.go:130] > # enable_metrics = false
	I1003 18:08:13.806629   31648 command_runner.go:130] > # Specify enabled metrics collectors.
	I1003 18:08:13.806635   31648 command_runner.go:130] > # Per default all metrics are enabled.
	I1003 18:08:13.806640   31648 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1003 18:08:13.806647   31648 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1003 18:08:13.806654   31648 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1003 18:08:13.806662   31648 command_runner.go:130] > # metrics_collectors = [
	I1003 18:08:13.806668   31648 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1003 18:08:13.806672   31648 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1003 18:08:13.806676   31648 command_runner.go:130] > # 	"containers_oom_total",
	I1003 18:08:13.806679   31648 command_runner.go:130] > # 	"processes_defunct",
	I1003 18:08:13.806682   31648 command_runner.go:130] > # 	"operations_total",
	I1003 18:08:13.806687   31648 command_runner.go:130] > # 	"operations_latency_seconds",
	I1003 18:08:13.806691   31648 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1003 18:08:13.806694   31648 command_runner.go:130] > # 	"operations_errors_total",
	I1003 18:08:13.806697   31648 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1003 18:08:13.806701   31648 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1003 18:08:13.806705   31648 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1003 18:08:13.806709   31648 command_runner.go:130] > # 	"image_pulls_success_total",
	I1003 18:08:13.806713   31648 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1003 18:08:13.806716   31648 command_runner.go:130] > # 	"containers_oom_count_total",
	I1003 18:08:13.806720   31648 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1003 18:08:13.806724   31648 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1003 18:08:13.806728   31648 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1003 18:08:13.806730   31648 command_runner.go:130] > # ]
	I1003 18:08:13.806736   31648 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1003 18:08:13.806739   31648 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1003 18:08:13.806744   31648 command_runner.go:130] > # The port on which the metrics server will listen.
	I1003 18:08:13.806747   31648 command_runner.go:130] > # metrics_port = 9090
	I1003 18:08:13.806751   31648 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1003 18:08:13.806755   31648 command_runner.go:130] > # metrics_socket = ""
	I1003 18:08:13.806759   31648 command_runner.go:130] > # The certificate for the secure metrics server.
	I1003 18:08:13.806765   31648 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1003 18:08:13.806770   31648 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1003 18:08:13.806774   31648 command_runner.go:130] > # certificate on any modification event.
	I1003 18:08:13.806780   31648 command_runner.go:130] > # metrics_cert = ""
	I1003 18:08:13.806785   31648 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1003 18:08:13.806791   31648 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1003 18:08:13.806795   31648 command_runner.go:130] > # metrics_key = ""
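A sketch of turning the metrics server on via a drop-in file, using the defaults shown above (the drop-in file name is illustrative; assumes CRI-O runs under systemd):

sudo tee /etc/crio/crio.conf.d/20-metrics.conf <<'EOF'
[crio.metrics]
enable_metrics = true
metrics_collectors = [
  "operations_total",
  "image_pulls_bytes_total",
]
EOF
sudo systemctl restart crio
# Prometheus endpoint on the default metrics_host/metrics_port:
curl -s http://127.0.0.1:9090/metrics | head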
	I1003 18:08:13.806802   31648 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1003 18:08:13.806805   31648 command_runner.go:130] > [crio.tracing]
	I1003 18:08:13.806810   31648 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1003 18:08:13.806816   31648 command_runner.go:130] > # enable_tracing = false
	I1003 18:08:13.806821   31648 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1003 18:08:13.806827   31648 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1003 18:08:13.806834   31648 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1003 18:08:13.806841   31648 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1003 18:08:13.806845   31648 command_runner.go:130] > # CRI-O NRI configuration.
	I1003 18:08:13.806850   31648 command_runner.go:130] > [crio.nri]
	I1003 18:08:13.806854   31648 command_runner.go:130] > # Globally enable or disable NRI.
	I1003 18:08:13.806860   31648 command_runner.go:130] > # enable_nri = true
	I1003 18:08:13.806864   31648 command_runner.go:130] > # NRI socket to listen on.
	I1003 18:08:13.806870   31648 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1003 18:08:13.806874   31648 command_runner.go:130] > # NRI plugin directory to use.
	I1003 18:08:13.806880   31648 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1003 18:08:13.806885   31648 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1003 18:08:13.806891   31648 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1003 18:08:13.806896   31648 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1003 18:08:13.806926   31648 command_runner.go:130] > # nri_disable_connections = false
	I1003 18:08:13.806934   31648 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1003 18:08:13.806938   31648 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1003 18:08:13.806944   31648 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1003 18:08:13.806948   31648 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1003 18:08:13.806955   31648 command_runner.go:130] > # NRI default validator configuration.
	I1003 18:08:13.806961   31648 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1003 18:08:13.806968   31648 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1003 18:08:13.806972   31648 command_runner.go:130] > # can be restricted/rejected:
	I1003 18:08:13.806990   31648 command_runner.go:130] > # - OCI hook injection
	I1003 18:08:13.806998   31648 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1003 18:08:13.807007   31648 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1003 18:08:13.807014   31648 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1003 18:08:13.807024   31648 command_runner.go:130] > # - adjustment of linux namespaces
	I1003 18:08:13.807033   31648 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1003 18:08:13.807041   31648 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1003 18:08:13.807046   31648 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1003 18:08:13.807051   31648 command_runner.go:130] > #
	I1003 18:08:13.807055   31648 command_runner.go:130] > # [crio.nri.default_validator]
	I1003 18:08:13.807060   31648 command_runner.go:130] > # nri_enable_default_validator = false
	I1003 18:08:13.807067   31648 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1003 18:08:13.807072   31648 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1003 18:08:13.807079   31648 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1003 18:08:13.807083   31648 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1003 18:08:13.807088   31648 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1003 18:08:13.807094   31648 command_runner.go:130] > # nri_validator_required_plugins = [
	I1003 18:08:13.807097   31648 command_runner.go:130] > # ]
	I1003 18:08:13.807104   31648 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1003 18:08:13.807109   31648 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1003 18:08:13.807115   31648 command_runner.go:130] > [crio.stats]
	I1003 18:08:13.807121   31648 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1003 18:08:13.807128   31648 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1003 18:08:13.807132   31648 command_runner.go:130] > # stats_collection_period = 0
	I1003 18:08:13.807141   31648 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1003 18:08:13.807147   31648 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1003 18:08:13.807154   31648 command_runner.go:130] > # collection_period = 0
	I1003 18:08:13.807173   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.78773481Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1003 18:08:13.807183   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.787758775Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1003 18:08:13.807194   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.787775454Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1003 18:08:13.807203   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.78779273Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1003 18:08:13.807213   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.7878475Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.807222   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.788021357Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1003 18:08:13.807234   31648 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1003 18:08:13.807290   31648 cni.go:84] Creating CNI manager for ""
	I1003 18:08:13.807303   31648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:08:13.807321   31648 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:08:13.807344   31648 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-889240 NodeName:functional-889240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:08:13.807460   31648 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-889240"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 18:08:13.807513   31648 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:08:13.814815   31648 command_runner.go:130] > kubeadm
	I1003 18:08:13.814829   31648 command_runner.go:130] > kubectl
	I1003 18:08:13.814834   31648 command_runner.go:130] > kubelet
	I1003 18:08:13.815427   31648 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:08:13.815489   31648 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:08:13.822648   31648 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1003 18:08:13.834615   31648 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:08:13.846006   31648 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1003 18:08:13.857402   31648 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 18:08:13.860916   31648 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1003 18:08:13.860998   31648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:08:13.942536   31648 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:08:13.955386   31648 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240 for IP: 192.168.49.2
	I1003 18:08:13.955406   31648 certs.go:195] generating shared ca certs ...
	I1003 18:08:13.955424   31648 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:08:13.955571   31648 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:08:13.955642   31648 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:08:13.955660   31648 certs.go:257] generating profile certs ...
	I1003 18:08:13.955770   31648 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.key
	I1003 18:08:13.955933   31648 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key.eb3f8f7c
	I1003 18:08:13.956034   31648 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key
	I1003 18:08:13.956049   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:08:13.956072   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:08:13.956090   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:08:13.956107   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:08:13.956123   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:08:13.956140   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:08:13.956160   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:08:13.956185   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:08:13.956244   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:08:13.956286   31648 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:08:13.956298   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:08:13.956331   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:08:13.956364   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:08:13.956397   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:08:13.956451   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:08:13.956487   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:08:13.956507   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:08:13.956528   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:13.957144   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:08:13.973779   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:08:13.990161   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:08:14.006157   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:08:14.022253   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 18:08:14.038198   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:08:14.054095   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:08:14.069959   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:08:14.085810   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:08:14.101812   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:08:14.117716   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:08:14.134093   31648 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:08:14.145835   31648 ssh_runner.go:195] Run: openssl version
	I1003 18:08:14.151369   31648 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1003 18:08:14.151660   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:08:14.160011   31648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:08:14.163572   31648 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:08:14.163595   31648 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:08:14.163631   31648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:08:14.196823   31648 command_runner.go:130] > 3ec20f2e
	I1003 18:08:14.197073   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:08:14.204835   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:08:14.212908   31648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:14.216400   31648 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:14.216425   31648 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:14.216454   31648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:14.249946   31648 command_runner.go:130] > b5213941
	I1003 18:08:14.250032   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:08:14.257940   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:08:14.266302   31648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:08:14.269939   31648 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:08:14.269964   31648 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:08:14.270013   31648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:08:14.303247   31648 command_runner.go:130] > 51391683
	I1003 18:08:14.303479   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
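The three test -L / ln -fs runs above all follow the same OpenSSL subject-hash pattern; as a standalone sketch:

# Install a CA certificate under the hash-based name OpenSSL looks up.
cert=/usr/share/ca-certificates/12212.pem
hash=$(openssl x509 -hash -noout -in "$cert")   # prints e.g. 51391683
sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"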
	I1003 18:08:14.311263   31648 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:08:14.314772   31648 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:08:14.314798   31648 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1003 18:08:14.314807   31648 command_runner.go:130] > Device: 8,1	Inode: 579409      Links: 1
	I1003 18:08:14.314815   31648 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1003 18:08:14.314823   31648 command_runner.go:130] > Access: 2025-10-03 18:04:07.266428775 +0000
	I1003 18:08:14.314828   31648 command_runner.go:130] > Modify: 2025-10-03 18:00:02.305264452 +0000
	I1003 18:08:14.314842   31648 command_runner.go:130] > Change: 2025-10-03 18:00:02.305264452 +0000
	I1003 18:08:14.314851   31648 command_runner.go:130] >  Birth: 2025-10-03 18:00:02.305264452 +0000
	I1003 18:08:14.314920   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 18:08:14.349195   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.349493   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 18:08:14.382820   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.383063   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 18:08:14.416849   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.416933   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 18:08:14.450508   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.450572   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 18:08:14.483927   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.484012   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1003 18:08:14.517658   31648 command_runner.go:130] > Certificate will not expire
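Each of the checks above is openssl x509 -checkend 86400, i.e. "will this certificate still be valid 24 hours from now"; the same sweep as a loop over the files checked:

for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
         etcd/healthcheck-client etcd/peer front-proxy-client; do
  sudo openssl x509 -noout -checkend 86400 \
    -in "/var/lib/minikube/certs/${c}.crt" && echo "${c}: valid for 24h"
done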
	I1003 18:08:14.518008   31648 kubeadm.go:400] StartCluster: {Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:08:14.518097   31648 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:08:14.518174   31648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:08:14.544326   31648 cri.go:89] found id: ""
	I1003 18:08:14.544381   31648 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:08:14.551440   31648 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1003 18:08:14.551457   31648 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1003 18:08:14.551463   31648 command_runner.go:130] > /var/lib/minikube/etcd:
	I1003 18:08:14.551962   31648 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 18:08:14.551995   31648 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 18:08:14.552044   31648 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 18:08:14.559024   31648 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:08:14.559104   31648 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-889240" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:14.559135   31648 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-8669/kubeconfig needs updating (will repair): [kubeconfig missing "functional-889240" cluster setting kubeconfig missing "functional-889240" context setting]
	I1003 18:08:14.559426   31648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:08:14.562686   31648 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:14.562840   31648 kapi.go:59] client config for functional-889240: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:08:14.563280   31648 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 18:08:14.563295   31648 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1003 18:08:14.563300   31648 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1003 18:08:14.563305   31648 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 18:08:14.563310   31648 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1003 18:08:14.563344   31648 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1003 18:08:14.563668   31648 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 18:08:14.571379   31648 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1003 18:08:14.571411   31648 kubeadm.go:601] duration metric: took 19.407047ms to restartPrimaryControlPlane
	I1003 18:08:14.571423   31648 kubeadm.go:402] duration metric: took 53.42211ms to StartCluster
	I1003 18:08:14.571440   31648 settings.go:142] acquiring lock: {Name:mk6bc950503a8f341b8aacc07a8bc72d5db3a25c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:08:14.571546   31648 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:14.572080   31648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:08:14.572261   31648 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:08:14.572328   31648 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 18:08:14.572418   31648 addons.go:69] Setting storage-provisioner=true in profile "functional-889240"
	I1003 18:08:14.572440   31648 addons.go:238] Setting addon storage-provisioner=true in "functional-889240"
	I1003 18:08:14.572443   31648 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:08:14.572454   31648 addons.go:69] Setting default-storageclass=true in profile "functional-889240"
	I1003 18:08:14.572472   31648 host.go:66] Checking if "functional-889240" exists ...
	I1003 18:08:14.572481   31648 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-889240"
	I1003 18:08:14.572708   31648 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:08:14.572822   31648 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:08:14.574934   31648 out.go:179] * Verifying Kubernetes components...
	I1003 18:08:14.575948   31648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:08:14.591352   31648 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:14.591562   31648 kapi.go:59] client config for functional-889240: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:08:14.591895   31648 addons.go:238] Setting addon default-storageclass=true in "functional-889240"
	I1003 18:08:14.591927   31648 host.go:66] Checking if "functional-889240" exists ...
	I1003 18:08:14.592300   31648 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:08:14.592939   31648 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:08:14.594638   31648 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:14.594655   31648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 18:08:14.594693   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:14.617423   31648 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:14.617446   31648 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 18:08:14.617507   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:14.620273   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:14.639039   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:14.672807   31648 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:08:14.684788   31648 node_ready.go:35] waiting up to 6m0s for node "functional-889240" to be "Ready" ...
	I1003 18:08:14.684921   31648 type.go:168] "Request Body" body=""
	I1003 18:08:14.685003   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:14.685252   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
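From here the node_ready wait polls GET /api/v1/nodes/functional-889240 roughly every 500ms, for up to the 6m0s announced above, looking for a Ready=True condition. A minimal analogue of that loop, assuming a recent k8s.io/apimachinery (for PollUntilContextTimeout) and the clientset from the earlier sketch; `waitNodeReady` is an illustrative name:

```go
package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node until its Ready condition is True or the
// timeout elapses, treating transient errors (connection refused while
// the apiserver restarts) as "not ready yet", exactly as the log does.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // retry; the wait only fails on timeout
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
```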
	I1003 18:08:14.730950   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:14.745066   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:14.786328   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:14.786378   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:14.786409   31648 retry.go:31] will retry after 270.951246ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:14.798186   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:14.798232   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:14.798258   31648 retry.go:31] will retry after 360.152106ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
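Each failed apply is handed to retry.go, which reschedules it with a growing, jittered delay (270ms, 360ms, ... climbing to ~13.8s later in this run). A rough analogue using the apimachinery backoff helper rather than minikube's internal retry package; the command mirrors the one logged, and `applyWithRetry` is a hypothetical name:

```go
package retrydemo

import (
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// applyWithRetry re-runs kubectl apply with jittered exponential backoff
// until it succeeds or the attempts are exhausted.
func applyWithRetry(manifest string) error {
	backoff := wait.Backoff{
		Duration: 250 * time.Millisecond, // initial delay, close to the logged 270ms
		Factor:   1.5,                    // grow the delay each step
		Jitter:   0.5,                    // randomize, matching the uneven delays above
		Steps:    10,
	}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply", "--force", "-f", manifest)
		if err := cmd.Run(); err != nil {
			return false, nil // transient failure: back off and try again
		}
		return true, nil
	})
}
```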
	I1003 18:08:15.057602   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:15.106841   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:15.109109   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.109138   31648 retry.go:31] will retry after 397.537911ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.159331   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:15.185817   31648 type.go:168] "Request Body" body=""
	I1003 18:08:15.185883   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:15.186219   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:15.210176   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:15.210221   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.210238   31648 retry.go:31] will retry after 493.012433ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.507675   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:15.555577   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:15.557666   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.557696   31648 retry.go:31] will retry after 440.122822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.685949   31648 type.go:168] "Request Body" body=""
	I1003 18:08:15.686038   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:15.686370   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:15.703496   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:15.753710   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:15.753758   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.753776   31648 retry.go:31] will retry after 795.152031ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.998073   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:16.047743   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:16.047782   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.047802   31648 retry.go:31] will retry after 705.62402ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.185279   31648 type.go:168] "Request Body" body=""
	I1003 18:08:16.185360   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:16.185691   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:16.549101   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:16.597196   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:16.599345   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.599377   31648 retry.go:31] will retry after 940.255489ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.685633   31648 type.go:168] "Request Body" body=""
	I1003 18:08:16.685701   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:16.685999   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:16.686058   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
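The warning makes the failure mode explicit: the apiserver socket at 192.168.49.2:8441 is refusing TCP connections while the control plane restarts, so the kubectl validation fetch (against localhost:8441 inside the node) and the direct node GETs fail the same way. A quick reachability probe for that endpoint, a sketch assuming only the address from the log:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" means the port is closed (apiserver still
	// coming up), as opposed to a TLS, auth, or routing problem.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}
```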
	I1003 18:08:16.754204   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:16.801452   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:16.803457   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.803489   31648 retry.go:31] will retry after 1.24021873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:17.184970   31648 type.go:168] "Request Body" body=""
	I1003 18:08:17.185063   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:17.185424   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:17.539832   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:17.590758   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:17.590802   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:17.590823   31648 retry.go:31] will retry after 1.395425458s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:17.685012   31648 type.go:168] "Request Body" body=""
	I1003 18:08:17.685095   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:17.685454   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:18.043958   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:18.094735   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:18.094776   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:18.094793   31648 retry.go:31] will retry after 1.596032935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:18.185003   31648 type.go:168] "Request Body" body=""
	I1003 18:08:18.185100   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:18.185407   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:18.685017   31648 type.go:168] "Request Body" body=""
	I1003 18:08:18.685100   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:18.685393   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:18.986876   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:19.035593   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:19.038332   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:19.038363   31648 retry.go:31] will retry after 1.200373965s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:19.185671   31648 type.go:168] "Request Body" body=""
	I1003 18:08:19.185764   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:19.186105   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:19.186155   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:19.686009   31648 type.go:168] "Request Body" body=""
	I1003 18:08:19.686091   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:19.686423   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:19.691557   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:19.741190   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:19.743532   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:19.743567   31648 retry.go:31] will retry after 3.569328126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:20.185118   31648 type.go:168] "Request Body" body=""
	I1003 18:08:20.185184   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:20.185523   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:20.239734   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:20.289529   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:20.291706   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:20.291741   31648 retry.go:31] will retry after 1.81500567s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:20.685251   31648 type.go:168] "Request Body" body=""
	I1003 18:08:20.685325   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:20.685635   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:21.185510   31648 type.go:168] "Request Body" body=""
	I1003 18:08:21.185583   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:21.185888   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:21.685727   31648 type.go:168] "Request Body" body=""
	I1003 18:08:21.685836   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:21.686208   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:21.686275   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:22.107768   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:22.158032   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:22.158081   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:22.158100   31648 retry.go:31] will retry after 3.676335527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:22.185231   31648 type.go:168] "Request Body" body=""
	I1003 18:08:22.185319   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:22.185614   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:22.685370   31648 type.go:168] "Request Body" body=""
	I1003 18:08:22.685451   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:22.685806   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:23.185639   31648 type.go:168] "Request Body" body=""
	I1003 18:08:23.185743   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:23.186048   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:23.313354   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:23.364461   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:23.364519   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:23.364543   31648 retry.go:31] will retry after 3.926696561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:23.685958   31648 type.go:168] "Request Body" body=""
	I1003 18:08:23.686044   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:23.686339   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:23.686396   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:24.186039   31648 type.go:168] "Request Body" body=""
	I1003 18:08:24.186135   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:24.186455   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:24.685152   31648 type.go:168] "Request Body" body=""
	I1003 18:08:24.685228   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:24.685576   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:25.185310   31648 type.go:168] "Request Body" body=""
	I1003 18:08:25.185375   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:25.185715   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:25.685392   31648 type.go:168] "Request Body" body=""
	I1003 18:08:25.685465   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:25.685774   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:25.835120   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:25.883846   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:25.886330   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:25.886360   31648 retry.go:31] will retry after 9.086319041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:26.185864   31648 type.go:168] "Request Body" body=""
	I1003 18:08:26.185950   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:26.186312   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:26.186362   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:26.685071   31648 type.go:168] "Request Body" body=""
	I1003 18:08:26.685149   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:26.685486   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:27.185231   31648 type.go:168] "Request Body" body=""
	I1003 18:08:27.185303   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:27.185670   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:27.291951   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:27.344646   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:27.344705   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:27.344728   31648 retry.go:31] will retry after 9.233335187s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:27.685027   31648 type.go:168] "Request Body" body=""
	I1003 18:08:27.685131   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:27.685438   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:28.185051   31648 type.go:168] "Request Body" body=""
	I1003 18:08:28.185123   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:28.185416   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:28.685061   31648 type.go:168] "Request Body" body=""
	I1003 18:08:28.685136   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:28.685436   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:28.685488   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:29.185050   31648 type.go:168] "Request Body" body=""
	I1003 18:08:29.185116   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:29.185410   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:29.685011   31648 type.go:168] "Request Body" body=""
	I1003 18:08:29.685107   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:29.685414   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:30.185028   31648 type.go:168] "Request Body" body=""
	I1003 18:08:30.185114   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:30.185401   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:30.685020   31648 type.go:168] "Request Body" body=""
	I1003 18:08:30.685097   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:30.685428   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:31.185273   31648 type.go:168] "Request Body" body=""
	I1003 18:08:31.185345   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:31.185680   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:31.185733   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:31.685419   31648 type.go:168] "Request Body" body=""
	I1003 18:08:31.685507   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:31.685800   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:32.185743   31648 type.go:168] "Request Body" body=""
	I1003 18:08:32.185852   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:32.186217   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:32.684952   31648 type.go:168] "Request Body" body=""
	I1003 18:08:32.685038   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:32.685332   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:33.185084   31648 type.go:168] "Request Body" body=""
	I1003 18:08:33.185176   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:33.185536   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:33.685288   31648 type.go:168] "Request Body" body=""
	I1003 18:08:33.685369   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:33.685664   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:33.685725   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:34.185445   31648 type.go:168] "Request Body" body=""
	I1003 18:08:34.185522   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:34.185879   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:34.685599   31648 type.go:168] "Request Body" body=""
	I1003 18:08:34.685698   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:34.686052   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:34.973491   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:35.025995   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:35.026042   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:35.026060   31648 retry.go:31] will retry after 13.835197481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
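The failure pattern above repeats for every addon manifest: with nothing listening on localhost:8441, kubectl cannot fetch the OpenAPI schema it uses for client-side validation, so each apply exits with status 1 and minikube reschedules it after a randomized delay (retry.go:31 in the trace). A minimal Go sketch of that retry shape, assuming a hypothetical apply() stand-in for the real kubectl invocation; the backoff schedule here is illustrative, not minikube's actual one:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // apply stands in for the failing `kubectl apply --force -f ...` call;
    // while the apiserver is down it always returns connection refused.
    func apply(manifest string) error {
        return errors.New("dial tcp [::1]:8441: connect: connection refused")
    }

    func main() {
        for attempt := 1; attempt <= 5; attempt++ {
            err := apply("storageclass.yaml")
            if err == nil {
                fmt.Println("applied")
                return
            }
            // Jittered wait, echoing the varying "will retry after"
            // intervals in the trace above.
            wait := time.Duration(5+rand.Intn(20)) * time.Second
            fmt.Printf("attempt %d failed, will retry after %s: %v\n", attempt, wait, err)
            time.Sleep(wait)
        }
    }

The jitter keeps concurrent appliers from hitting the apiserver in lockstep, which is why each "will retry after" interval in the log differs.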
	I1003 18:08:35.185336   31648 type.go:168] "Request Body" body=""
	I1003 18:08:35.185419   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:35.185713   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:35.685344   31648 type.go:168] "Request Body" body=""
	I1003 18:08:35.685434   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:35.685770   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:35.685857   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:36.185648   31648 type.go:168] "Request Body" body=""
	I1003 18:08:36.185719   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:36.186013   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:36.578491   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:36.629045   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:36.629094   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:36.629123   31648 retry.go:31] will retry after 7.439097167s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:36.685279   31648 type.go:168] "Request Body" body=""
	I1003 18:08:36.685356   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:36.685671   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:37.185440   31648 type.go:168] "Request Body" body=""
	I1003 18:08:37.185503   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:37.185805   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:37.685609   31648 type.go:168] "Request Body" body=""
	I1003 18:08:37.685705   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:37.686055   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:37.686118   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:38.185875   31648 type.go:168] "Request Body" body=""
	I1003 18:08:38.185966   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:38.186273   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:38.685047   31648 type.go:168] "Request Body" body=""
	I1003 18:08:38.685111   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:38.685422   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:39.185132   31648 type.go:168] "Request Body" body=""
	I1003 18:08:39.185219   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:39.185524   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:39.685244   31648 type.go:168] "Request Body" body=""
	I1003 18:08:39.685308   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:39.685620   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:40.185346   31648 type.go:168] "Request Body" body=""
	I1003 18:08:40.185409   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:40.185703   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:40.185782   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
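Between apply attempts, the readiness waiter keeps polling the node object on a roughly 500ms cadence and surfaces a warning (node_ready.go:55) every few cycles while the TCP connect is refused. A sketch of that cadence, assuming a hypothetical pollNodeReady helper rather than minikube's actual node_ready.go:

    package main

    import (
        "context"
        "fmt"
        "net/http"
        "time"
    )

    // pollNodeReady issues a GET against the node URL every 500ms until it
    // gets a 200 or the context expires; connection errors are logged and
    // retried, matching the trace above.
    func pollNodeReady(ctx context.Context, url string) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
                resp, err := http.Get(url)
                if err != nil {
                    fmt.Println("will retry:", err)
                    continue
                }
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        _ = pollNodeReady(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-889240")
    }

The real client also negotiates TLS and the protobuf content type (the Accept header in the trace); the plain http.Get here is only meant to show the loop shape.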
	I1003 18:08:40.685452   31648 type.go:168] "Request Body" body=""
	I1003 18:08:40.685560   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:40.685889   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:41.185504   31648 type.go:168] "Request Body" body=""
	I1003 18:08:41.185583   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:41.185889   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:41.685695   31648 type.go:168] "Request Body" body=""
	I1003 18:08:41.685767   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:41.686090   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:42.185782   31648 type.go:168] "Request Body" body=""
	I1003 18:08:42.185862   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:42.186224   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:42.186281   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:42.685859   31648 type.go:168] "Request Body" body=""
	I1003 18:08:42.685952   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:42.686271   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:43.185893   31648 type.go:168] "Request Body" body=""
	I1003 18:08:43.185999   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:43.186296   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:43.685944   31648 type.go:168] "Request Body" body=""
	I1003 18:08:43.686017   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:43.686309   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:44.068807   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:44.118932   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:44.118993   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:44.119018   31648 retry.go:31] will retry after 11.649333138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:44.185207   31648 type.go:168] "Request Body" body=""
	I1003 18:08:44.185271   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:44.185562   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:44.685354   31648 type.go:168] "Request Body" body=""
	I1003 18:08:44.685421   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:44.685759   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:44.685811   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:45.185341   31648 type.go:168] "Request Body" body=""
	I1003 18:08:45.185433   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:45.185739   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:45.685457   31648 type.go:168] "Request Body" body=""
	I1003 18:08:45.685529   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:45.685878   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:46.185715   31648 type.go:168] "Request Body" body=""
	I1003 18:08:46.185814   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:46.186178   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:46.685956   31648 type.go:168] "Request Body" body=""
	I1003 18:08:46.686040   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:46.686342   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:46.686417   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:47.185108   31648 type.go:168] "Request Body" body=""
	I1003 18:08:47.185173   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:47.185454   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:47.685185   31648 type.go:168] "Request Body" body=""
	I1003 18:08:47.685263   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:47.685629   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:48.185337   31648 type.go:168] "Request Body" body=""
	I1003 18:08:48.185401   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:48.185716   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:48.685423   31648 type.go:168] "Request Body" body=""
	I1003 18:08:48.685491   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:48.685791   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:48.862137   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:48.911551   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:48.911612   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:48.911635   31648 retry.go:31] will retry after 10.230842759s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:49.184986   31648 type.go:168] "Request Body" body=""
	I1003 18:08:49.185056   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:49.185386   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:49.185450   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:49.685132   31648 type.go:168] "Request Body" body=""
	I1003 18:08:49.685197   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:49.685528   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:50.185253   31648 type.go:168] "Request Body" body=""
	I1003 18:08:50.185319   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:50.185649   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:50.685352   31648 type.go:168] "Request Body" body=""
	I1003 18:08:50.685456   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:50.685777   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:51.185614   31648 type.go:168] "Request Body" body=""
	I1003 18:08:51.185727   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:51.186089   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:51.186142   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:51.685865   31648 type.go:168] "Request Body" body=""
	I1003 18:08:51.685970   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:51.686292   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:52.185039   31648 type.go:168] "Request Body" body=""
	I1003 18:08:52.185145   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:52.185488   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:52.685238   31648 type.go:168] "Request Body" body=""
	I1003 18:08:52.685302   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:52.685617   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:53.185313   31648 type.go:168] "Request Body" body=""
	I1003 18:08:53.185377   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:53.185697   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:53.685459   31648 type.go:168] "Request Body" body=""
	I1003 18:08:53.685528   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:53.685880   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:53.685930   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:54.185736   31648 type.go:168] "Request Body" body=""
	I1003 18:08:54.185800   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:54.186122   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:54.685875   31648 type.go:168] "Request Body" body=""
	I1003 18:08:54.685940   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:54.686284   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:55.185038   31648 type.go:168] "Request Body" body=""
	I1003 18:08:55.185103   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:55.185420   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:55.685122   31648 type.go:168] "Request Body" body=""
	I1003 18:08:55.685213   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:55.685505   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:55.768789   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:55.820187   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:55.820247   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:55.820271   31648 retry.go:31] will retry after 17.817355848s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:56.185846   31648 type.go:168] "Request Body" body=""
	I1003 18:08:56.185913   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:56.186233   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:56.186374   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:56.685948   31648 type.go:168] "Request Body" body=""
	I1003 18:08:56.686081   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:56.686423   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:57.185019   31648 type.go:168] "Request Body" body=""
	I1003 18:08:57.185105   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:57.185399   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:57.684931   31648 type.go:168] "Request Body" body=""
	I1003 18:08:57.685041   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:57.685319   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:58.185047   31648 type.go:168] "Request Body" body=""
	I1003 18:08:58.185109   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:58.185402   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:58.685125   31648 type.go:168] "Request Body" body=""
	I1003 18:08:58.685211   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:58.685543   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:58.685617   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:59.143069   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:59.185821   31648 type.go:168] "Request Body" body=""
	I1003 18:08:59.185917   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:59.186232   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:59.193474   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:59.193510   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:59.193527   31648 retry.go:31] will retry after 25.255183485s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
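Note how the apply at 18:08:59.143 and the poll at 18:08:59.185 interleave: the readiness waiter and the addon applier run as independent loops, so their log lines mix freely in the trace. A toy reproduction of that interleaving, with both loop bodies as hypothetical stand-ins:

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    func main() {
        var wg sync.WaitGroup
        wg.Add(2)
        go func() { // readiness poller, ~500ms cadence
            defer wg.Done()
            for i := 0; i < 6; i++ {
                fmt.Println("GET /api/v1/nodes/functional-889240: connection refused")
                time.Sleep(500 * time.Millisecond)
            }
        }()
        go func() { // addon applier, longer jittered schedule
            defer wg.Done()
            for i := 0; i < 2; i++ {
                fmt.Println("kubectl apply storageclass.yaml: connection refused, will retry")
                time.Sleep(1200 * time.Millisecond)
            }
        }()
        wg.Wait()
    }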
	I1003 18:08:59.685108   31648 type.go:168] "Request Body" body=""
	I1003 18:08:59.685198   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:59.685504   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:00.185069   31648 type.go:168] "Request Body" body=""
	I1003 18:09:00.185163   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:00.185465   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:00.685045   31648 type.go:168] "Request Body" body=""
	I1003 18:09:00.685107   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:00.685401   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:01.185250   31648 type.go:168] "Request Body" body=""
	I1003 18:09:01.185349   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:01.185688   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:01.185754   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:01.685310   31648 type.go:168] "Request Body" body=""
	I1003 18:09:01.685402   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:01.685720   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:02.185253   31648 type.go:168] "Request Body" body=""
	I1003 18:09:02.185346   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:02.185664   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:02.685182   31648 type.go:168] "Request Body" body=""
	I1003 18:09:02.685247   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:02.685567   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:03.185121   31648 type.go:168] "Request Body" body=""
	I1003 18:09:03.185184   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:03.185472   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:03.685069   31648 type.go:168] "Request Body" body=""
	I1003 18:09:03.685140   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:03.685473   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:03.685548   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:04.185138   31648 type.go:168] "Request Body" body=""
	I1003 18:09:04.185208   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:04.185511   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:04.685397   31648 type.go:168] "Request Body" body=""
	I1003 18:09:04.685498   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:04.685815   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:05.185368   31648 type.go:168] "Request Body" body=""
	I1003 18:09:05.185430   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:05.185752   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:05.685306   31648 type.go:168] "Request Body" body=""
	I1003 18:09:05.685399   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:05.685722   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:05.685773   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:06.185506   31648 type.go:168] "Request Body" body=""
	I1003 18:09:06.185596   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:06.185889   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:06.685509   31648 type.go:168] "Request Body" body=""
	I1003 18:09:06.685600   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:06.685920   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:07.185528   31648 type.go:168] "Request Body" body=""
	I1003 18:09:07.185591   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:07.185930   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:07.685592   31648 type.go:168] "Request Body" body=""
	I1003 18:09:07.685666   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:07.686000   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:07.686050   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:08.185578   31648 type.go:168] "Request Body" body=""
	I1003 18:09:08.185676   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:08.185969   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:08.685655   31648 type.go:168] "Request Body" body=""
	I1003 18:09:08.685728   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:08.686124   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:09.185744   31648 type.go:168] "Request Body" body=""
	I1003 18:09:09.185811   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:09.186109   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:09.685870   31648 type.go:168] "Request Body" body=""
	I1003 18:09:09.685938   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:09.686249   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:09.686300   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:10.185899   31648 type.go:168] "Request Body" body=""
	I1003 18:09:10.185995   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:10.186296   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:10.684943   31648 type.go:168] "Request Body" body=""
	I1003 18:09:10.685033   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:10.685323   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:11.185004   31648 type.go:168] "Request Body" body=""
	I1003 18:09:11.185066   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:11.185370   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:11.684959   31648 type.go:168] "Request Body" body=""
	I1003 18:09:11.685050   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:11.685368   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:12.184955   31648 type.go:168] "Request Body" body=""
	I1003 18:09:12.185063   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:12.185367   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:12.185420   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:12.684941   31648 type.go:168] "Request Body" body=""
	I1003 18:09:12.685054   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:12.685356   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:13.185955   31648 type.go:168] "Request Body" body=""
	I1003 18:09:13.186031   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:13.186349   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:13.637912   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:09:13.685539   31648 type.go:168] "Request Body" body=""
	I1003 18:09:13.685624   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:13.685989   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:13.686249   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:09:13.688536   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:09:13.688567   31648 retry.go:31] will retry after 16.395640375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:09:14.185086   31648 type.go:168] "Request Body" body=""
	I1003 18:09:14.185158   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:14.185474   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:14.185528   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:14.685417   31648 type.go:168] "Request Body" body=""
	I1003 18:09:14.685504   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:14.685861   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:15.185730   31648 type.go:168] "Request Body" body=""
	I1003 18:09:15.185803   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:15.186135   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:15.685950   31648 type.go:168] "Request Body" body=""
	I1003 18:09:15.686047   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:15.686390   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:16.185313   31648 type.go:168] "Request Body" body=""
	I1003 18:09:16.185381   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:16.185711   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:16.185784   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:16.685449   31648 type.go:168] "Request Body" body=""
	I1003 18:09:16.685527   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:16.685889   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:17.185723   31648 type.go:168] "Request Body" body=""
	I1003 18:09:17.185815   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:17.186154   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:17.685963   31648 type.go:168] "Request Body" body=""
	I1003 18:09:17.686103   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:17.686430   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:18.185163   31648 type.go:168] "Request Body" body=""
	I1003 18:09:18.185228   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:18.185536   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:18.685287   31648 type.go:168] "Request Body" body=""
	I1003 18:09:18.685398   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:18.685756   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:18.685818   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:19.185602   31648 type.go:168] "Request Body" body=""
	I1003 18:09:19.185674   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:19.186025   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:19.685824   31648 type.go:168] "Request Body" body=""
	I1003 18:09:19.685902   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:19.686264   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:20.185104   31648 type.go:168] "Request Body" body=""
	I1003 18:09:20.185178   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:20.185565   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:20.685343   31648 type.go:168] "Request Body" body=""
	I1003 18:09:20.685448   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:20.685814   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:20.685865   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:21.185641   31648 type.go:168] "Request Body" body=""
	I1003 18:09:21.185717   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:21.186091   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:21.685899   31648 type.go:168] "Request Body" body=""
	I1003 18:09:21.686019   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:21.686347   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:22.185083   31648 type.go:168] "Request Body" body=""
	I1003 18:09:22.185175   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:22.185486   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:22.685245   31648 type.go:168] "Request Body" body=""
	I1003 18:09:22.685334   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:22.685730   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:23.185497   31648 type.go:168] "Request Body" body=""
	I1003 18:09:23.185562   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:23.185880   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:23.185935   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:23.685744   31648 type.go:168] "Request Body" body=""
	I1003 18:09:23.685811   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:23.686201   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:24.184964   31648 type.go:168] "Request Body" body=""
	I1003 18:09:24.185078   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:24.185397   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:24.449821   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:09:24.497529   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:09:24.499857   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:09:24.499886   31648 retry.go:31] will retry after 48.383287224s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
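
The retry.go line above schedules the next storageclass apply roughly 48 s out. A sketch of that apply-and-retry-with-backoff pattern, under the assumption of a growing jittered delay (this is not minikube's actual retry.go; retryApply and applyAddon are hypothetical stand-ins for the kubectl invocation shown in the log):

	// Sketch of the "apply failed, will retry after ..." pattern; not minikube's code.
	package retrysketch

	import (
		"context"
		"log"
		"math/rand"
		"time"
	)

	// retryApply runs applyAddon until it succeeds or attempts are exhausted,
	// doubling a jittered delay between tries so a briefly-down apiserver
	// (connection refused, as above) has time to come back.
	func retryApply(ctx context.Context, applyAddon func() error, attempts int) error {
		base := 10 * time.Second
		var err error
		for i := 0; i < attempts; i++ {
			if err = applyAddon(); err == nil {
				return nil
			}
			delay := base + time.Duration(rand.Int63n(int64(base))) // add jitter
			log.Printf("apply failed, will retry after %s: %v", delay, err)
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(delay):
			}
			base *= 2
		}
		return err
	}

Jitter keeps the many addons applied in parallel from retrying in lockstep against a recovering apiserver.
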
	I1003 18:09:24.685468   31648 type.go:168] "Request Body" body=""
	I1003 18:09:24.685534   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:24.685867   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:25.185654   31648 type.go:168] "Request Body" body=""
	I1003 18:09:25.185748   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:25.186075   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:25.186127   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:25.685902   31648 type.go:168] "Request Body" body=""
	I1003 18:09:25.685999   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:25.686299   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:26.185018   31648 type.go:168] "Request Body" body=""
	I1003 18:09:26.185106   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:26.185414   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:26.685136   31648 type.go:168] "Request Body" body=""
	I1003 18:09:26.685216   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:26.685515   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:27.185253   31648 type.go:168] "Request Body" body=""
	I1003 18:09:27.185318   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:27.185650   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:27.685386   31648 type.go:168] "Request Body" body=""
	I1003 18:09:27.685451   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:27.685791   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:27.685845   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:28.185583   31648 type.go:168] "Request Body" body=""
	I1003 18:09:28.185675   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:28.186015   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:28.685836   31648 type.go:168] "Request Body" body=""
	I1003 18:09:28.685940   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:28.686317   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:29.185053   31648 type.go:168] "Request Body" body=""
	I1003 18:09:29.185118   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:29.185421   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:29.685145   31648 type.go:168] "Request Body" body=""
	I1003 18:09:29.685239   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:29.685545   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:30.085101   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:09:30.133826   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:09:30.136048   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:09:30.136077   31648 retry.go:31] will retry after 44.319890963s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
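
Every failure in this section shares one root cause: nothing is listening on apiserver port 8441 while the cluster restarts, so both the direct node GETs (192.168.49.2:8441) and kubectl's OpenAPI download (localhost:8441) are refused. Note that the --validate=false hint in the error would not help here: it only skips client-side schema validation, and the apply itself still needs a reachable server. A small diagnostic sketch that checks exactly what these errors report (not part of minikube; probeAPIServer is a hypothetical helper):

	// Diagnostic sketch: is anything accepting TCP connections on the apiserver port?
	package probe

	import (
		"fmt"
		"net"
		"time"
	)

	func probeAPIServer(addr string) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// Matches the log: dial tcp 192.168.49.2:8441: connect: connection refused
			fmt.Printf("apiserver unreachable at %s: %v\n", addr, err)
			return
		}
		conn.Close()
		fmt.Printf("apiserver port open at %s\n", addr)
	}

For this run, probeAPIServer("192.168.49.2:8441") would keep printing the same connection-refused error until the apiserver container comes back.
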
	I1003 18:09:30.185379   31648 type.go:168] "Request Body" body=""
	I1003 18:09:30.185467   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:30.185752   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:30.185824   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:30.685605   31648 type.go:168] "Request Body" body=""
	I1003 18:09:30.685677   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:30.686026   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:31.185741   31648 type.go:168] "Request Body" body=""
	I1003 18:09:31.185821   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:31.186131   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:31.685990   31648 type.go:168] "Request Body" body=""
	I1003 18:09:31.686102   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:31.686418   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:32.185174   31648 type.go:168] "Request Body" body=""
	I1003 18:09:32.185268   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:32.185574   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:32.685346   31648 type.go:168] "Request Body" body=""
	I1003 18:09:32.685414   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:32.685749   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:32.685798   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:33.185523   31648 type.go:168] "Request Body" body=""
	I1003 18:09:33.185630   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:33.185973   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:33.685847   31648 type.go:168] "Request Body" body=""
	I1003 18:09:33.685917   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:33.686290   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:34.185044   31648 type.go:168] "Request Body" body=""
	I1003 18:09:34.185158   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:34.185479   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:34.685329   31648 type.go:168] "Request Body" body=""
	I1003 18:09:34.685395   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:34.685778   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:34.685850   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:35.185617   31648 type.go:168] "Request Body" body=""
	I1003 18:09:35.185711   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:35.186046   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:35.685845   31648 type.go:168] "Request Body" body=""
	I1003 18:09:35.685931   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:35.686261   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:36.184952   31648 type.go:168] "Request Body" body=""
	I1003 18:09:36.185036   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:36.185378   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:36.685083   31648 type.go:168] "Request Body" body=""
	I1003 18:09:36.685158   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:36.685526   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:37.185252   31648 type.go:168] "Request Body" body=""
	I1003 18:09:37.185333   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:37.185680   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:37.185740   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:37.685420   31648 type.go:168] "Request Body" body=""
	I1003 18:09:37.685494   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:37.685856   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:38.185680   31648 type.go:168] "Request Body" body=""
	I1003 18:09:38.185779   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:38.186105   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:38.685935   31648 type.go:168] "Request Body" body=""
	I1003 18:09:38.686035   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:38.686351   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:39.185118   31648 type.go:168] "Request Body" body=""
	I1003 18:09:39.185189   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:39.185487   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:39.685188   31648 type.go:168] "Request Body" body=""
	I1003 18:09:39.685265   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:39.685570   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:39.685631   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:40.185362   31648 type.go:168] "Request Body" body=""
	I1003 18:09:40.185457   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:40.185802   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:40.685609   31648 type.go:168] "Request Body" body=""
	I1003 18:09:40.685713   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:40.686101   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:41.186030   31648 type.go:168] "Request Body" body=""
	I1003 18:09:41.186101   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:41.186433   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:41.685075   31648 type.go:168] "Request Body" body=""
	I1003 18:09:41.685142   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:41.685469   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:42.185193   31648 type.go:168] "Request Body" body=""
	I1003 18:09:42.185257   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:42.185565   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:42.185630   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:42.685077   31648 type.go:168] "Request Body" body=""
	I1003 18:09:42.685172   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:42.685483   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:43.185219   31648 type.go:168] "Request Body" body=""
	I1003 18:09:43.185289   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:43.185605   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:43.685108   31648 type.go:168] "Request Body" body=""
	I1003 18:09:43.685175   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:43.685496   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:44.185214   31648 type.go:168] "Request Body" body=""
	I1003 18:09:44.185314   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:44.185626   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:44.185696   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:44.685443   31648 type.go:168] "Request Body" body=""
	I1003 18:09:44.685535   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:44.685860   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:45.185669   31648 type.go:168] "Request Body" body=""
	I1003 18:09:45.185734   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:45.186050   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:45.685869   31648 type.go:168] "Request Body" body=""
	I1003 18:09:45.685940   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:45.686258   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:46.184960   31648 type.go:168] "Request Body" body=""
	I1003 18:09:46.185084   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:46.185423   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:46.685149   31648 type.go:168] "Request Body" body=""
	I1003 18:09:46.685219   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:46.685543   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:46.685599   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:47.185302   31648 type.go:168] "Request Body" body=""
	I1003 18:09:47.185370   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:47.185710   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:47.685432   31648 type.go:168] "Request Body" body=""
	I1003 18:09:47.685496   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:47.685808   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:48.185599   31648 type.go:168] "Request Body" body=""
	I1003 18:09:48.185663   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:48.186043   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:48.685839   31648 type.go:168] "Request Body" body=""
	I1003 18:09:48.685931   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:48.686255   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:48.686305   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:49.185022   31648 type.go:168] "Request Body" body=""
	I1003 18:09:49.185091   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:49.185409   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:49.685097   31648 type.go:168] "Request Body" body=""
	I1003 18:09:49.685189   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:49.685510   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:50.185245   31648 type.go:168] "Request Body" body=""
	I1003 18:09:50.185317   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:50.185675   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:50.685396   31648 type.go:168] "Request Body" body=""
	I1003 18:09:50.685460   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:50.685814   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:51.185668   31648 type.go:168] "Request Body" body=""
	I1003 18:09:51.185757   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:51.186064   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:51.186116   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:51.685866   31648 type.go:168] "Request Body" body=""
	I1003 18:09:51.685934   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:51.686277   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:52.185003   31648 type.go:168] "Request Body" body=""
	I1003 18:09:52.185067   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:52.185368   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:52.685121   31648 type.go:168] "Request Body" body=""
	I1003 18:09:52.685219   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:52.685573   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:53.185280   31648 type.go:168] "Request Body" body=""
	I1003 18:09:53.185339   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:53.185633   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:53.685331   31648 type.go:168] "Request Body" body=""
	I1003 18:09:53.685395   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:53.685759   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:53.685836   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:54.185620   31648 type.go:168] "Request Body" body=""
	I1003 18:09:54.185691   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:54.186007   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:54.685714   31648 type.go:168] "Request Body" body=""
	I1003 18:09:54.685778   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:54.686135   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:55.185951   31648 type.go:168] "Request Body" body=""
	I1003 18:09:55.186058   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:55.186387   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:55.685101   31648 type.go:168] "Request Body" body=""
	I1003 18:09:55.685193   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:55.685564   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:56.185405   31648 type.go:168] "Request Body" body=""
	I1003 18:09:56.185491   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:56.185823   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:56.185874   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:56.685614   31648 type.go:168] "Request Body" body=""
	I1003 18:09:56.685702   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:56.686026   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:57.185904   31648 type.go:168] "Request Body" body=""
	I1003 18:09:57.186000   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:57.186336   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:57.685087   31648 type.go:168] "Request Body" body=""
	I1003 18:09:57.685160   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:57.685447   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:58.185160   31648 type.go:168] "Request Body" body=""
	I1003 18:09:58.185246   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:58.185558   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:58.685303   31648 type.go:168] "Request Body" body=""
	I1003 18:09:58.685365   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:58.685671   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:58.685755   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:59.185446   31648 type.go:168] "Request Body" body=""
	I1003 18:09:59.185545   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:59.185914   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:59.685737   31648 type.go:168] "Request Body" body=""
	I1003 18:09:59.685801   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:59.686146   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:00.185972   31648 type.go:168] "Request Body" body=""
	I1003 18:10:00.186075   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:00.186364   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:00.685077   31648 type.go:168] "Request Body" body=""
	I1003 18:10:00.685166   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:00.685464   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:01.185382   31648 type.go:168] "Request Body" body=""
	I1003 18:10:01.185446   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:01.185778   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:01.185830   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:01.685606   31648 type.go:168] "Request Body" body=""
	I1003 18:10:01.685677   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:01.686032   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:02.185907   31648 type.go:168] "Request Body" body=""
	I1003 18:10:02.186020   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:02.186378   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:02.685091   31648 type.go:168] "Request Body" body=""
	I1003 18:10:02.685152   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:02.685445   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:03.185142   31648 type.go:168] "Request Body" body=""
	I1003 18:10:03.185225   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:03.185561   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:03.685236   31648 type.go:168] "Request Body" body=""
	I1003 18:10:03.685339   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:03.685634   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:03.685696   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:04.185365   31648 type.go:168] "Request Body" body=""
	I1003 18:10:04.185433   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:04.185727   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:04.685562   31648 type.go:168] "Request Body" body=""
	I1003 18:10:04.685630   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:04.686027   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:05.185808   31648 type.go:168] "Request Body" body=""
	I1003 18:10:05.185875   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:05.186210   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:05.686012   31648 type.go:168] "Request Body" body=""
	I1003 18:10:05.686094   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:05.686420   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:05.686513   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:06.185220   31648 type.go:168] "Request Body" body=""
	I1003 18:10:06.185317   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:06.185670   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:06.685370   31648 type.go:168] "Request Body" body=""
	I1003 18:10:06.685434   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:06.685727   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:07.185434   31648 type.go:168] "Request Body" body=""
	I1003 18:10:07.185512   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:07.185878   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:07.685679   31648 type.go:168] "Request Body" body=""
	I1003 18:10:07.685748   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:07.686309   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:08.185067   31648 type.go:168] "Request Body" body=""
	I1003 18:10:08.185137   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:08.185459   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:08.185516   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:08.685191   31648 type.go:168] "Request Body" body=""
	I1003 18:10:08.685261   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:08.685582   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:09.185329   31648 type.go:168] "Request Body" body=""
	I1003 18:10:09.185397   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:09.185705   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:09.685441   31648 type.go:168] "Request Body" body=""
	I1003 18:10:09.685504   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:09.685840   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:10.185620   31648 type.go:168] "Request Body" body=""
	I1003 18:10:10.185689   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:10.186037   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:10.186087   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:10.685838   31648 type.go:168] "Request Body" body=""
	I1003 18:10:10.685914   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:10.686280   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:11.184954   31648 type.go:168] "Request Body" body=""
	I1003 18:10:11.185044   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:11.185353   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:11.685099   31648 type.go:168] "Request Body" body=""
	I1003 18:10:11.685168   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:11.685473   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:12.185192   31648 type.go:168] "Request Body" body=""
	I1003 18:10:12.185259   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:12.185564   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:12.685315   31648 type.go:168] "Request Body" body=""
	I1003 18:10:12.685386   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:12.685819   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:12.685875   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:12.884184   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:10:12.932382   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:10:12.934859   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:10:12.935018   31648 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
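kubectl validates manifests against the server's OpenAPI schema, so with the apiserver down the schema download itself fails with "connection refused" before anything is applied; --validate=false would only skip that step, and the apply would still fail against the unreachable server. addons.go treats the failure as retryable ("apply failed, will retry"). A sketch of such a retry wrapper, shelling out the same command as the ssh_runner line above; the helper name, attempt count, and backoff are assumptions for illustration:

// applyretry_sketch.go: hypothetical retry around "kubectl apply", mirroring
// the "apply failed, will retry" behaviour seen in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply %s: %v: %s", manifest, err, out)
		time.Sleep(2 * time.Second) // back off while the apiserver comes up
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
		fmt.Println("giving up:", err)
	}
}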
	I1003 18:10:13.185242   31648 type.go:168] "Request Body" body=""
	I1003 18:10:13.185310   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:13.185617   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:13.685328   31648 type.go:168] "Request Body" body=""
	I1003 18:10:13.685430   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:13.685917   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:14.185730   31648 type.go:168] "Request Body" body=""
	I1003 18:10:14.185796   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:14.186122   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:14.456560   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:10:14.507486   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:10:14.509939   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:10:14.510064   31648 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1003 18:10:14.512677   31648 out.go:179] * Enabled addons: 
	I1003 18:10:14.514281   31648 addons.go:514] duration metric: took 1m59.941954445s for enable addons: enabled=[]
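Both addon callbacks failed the same way, so the run finishes with enabled=[] after roughly two minutes. Note the two failure paths hit different addresses: the readiness poll dials 192.168.49.2:8441 directly, while kubectl's OpenAPI download resolves localhost and fails on [::1]:8441. Both refusing suggests the apiserver process itself is down rather than a single broken route. A small probe that reproduces both results; the addresses are taken from the log, everything else is illustrative:

// probe_sketch.go: dial the two endpoints the log shows failing and print
// the error, which should read "connect: connection refused" while the
// apiserver is down.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	for _, addr := range []string{"192.168.49.2:8441", "[::1]:8441"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Println(addr, "->", err)
			continue
		}
		conn.Close()
		fmt.Println(addr, "-> open")
	}
}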
	I1003 18:10:14.685449   31648 type.go:168] "Request Body" body=""
	I1003 18:10:14.685516   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:14.685857   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:14.685919   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:15.185675   31648 type.go:168] "Request Body" body=""
	I1003 18:10:15.185738   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:15.186060   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical GET /api/v1/nodes/functional-889240 request/response cycles repeat every ~500 ms from 18:10:15 through 18:11:05, each failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; the node_ready.go:55 "will retry" warning recurs about every 2.5 s ...]
	I1003 18:11:05.185189   31648 type.go:168] "Request Body" body=""
	I1003 18:11:05.185258   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:05.185567   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:05.185625   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:05.685299   31648 type.go:168] "Request Body" body=""
	I1003 18:11:05.685378   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:05.685703   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:06.185511   31648 type.go:168] "Request Body" body=""
	I1003 18:11:06.185600   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:06.185915   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:06.685750   31648 type.go:168] "Request Body" body=""
	I1003 18:11:06.685834   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:06.686186   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:07.185989   31648 type.go:168] "Request Body" body=""
	I1003 18:11:07.186058   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:07.186369   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:07.186436   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:07.685126   31648 type.go:168] "Request Body" body=""
	I1003 18:11:07.685203   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:07.685514   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:08.185223   31648 type.go:168] "Request Body" body=""
	I1003 18:11:08.185315   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:08.185627   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:08.685356   31648 type.go:168] "Request Body" body=""
	I1003 18:11:08.685469   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:08.685819   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:09.185588   31648 type.go:168] "Request Body" body=""
	I1003 18:11:09.185655   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:09.186048   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:09.685858   31648 type.go:168] "Request Body" body=""
	I1003 18:11:09.685945   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:09.686291   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:09.686344   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:10.185028   31648 type.go:168] "Request Body" body=""
	I1003 18:11:10.185112   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:10.185419   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:10.685125   31648 type.go:168] "Request Body" body=""
	I1003 18:11:10.685235   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:10.685580   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:11.185333   31648 type.go:168] "Request Body" body=""
	I1003 18:11:11.185400   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:11.185721   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:11.685427   31648 type.go:168] "Request Body" body=""
	I1003 18:11:11.685540   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:11.685876   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:12.185659   31648 type.go:168] "Request Body" body=""
	I1003 18:11:12.185756   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:12.186078   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:12.186142   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:12.685887   31648 type.go:168] "Request Body" body=""
	I1003 18:11:12.685959   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:12.686282   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:13.185003   31648 type.go:168] "Request Body" body=""
	I1003 18:11:13.185081   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:13.185409   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:13.685094   31648 type.go:168] "Request Body" body=""
	I1003 18:11:13.685164   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:13.685478   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:14.185184   31648 type.go:168] "Request Body" body=""
	I1003 18:11:14.185260   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:14.185598   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:14.685408   31648 type.go:168] "Request Body" body=""
	I1003 18:11:14.685477   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:14.685794   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:14.685865   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:15.185614   31648 type.go:168] "Request Body" body=""
	I1003 18:11:15.185690   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:15.186097   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:15.685915   31648 type.go:168] "Request Body" body=""
	I1003 18:11:15.686020   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:15.686331   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:16.185164   31648 type.go:168] "Request Body" body=""
	I1003 18:11:16.185233   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:16.185540   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:16.685230   31648 type.go:168] "Request Body" body=""
	I1003 18:11:16.685290   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:16.685601   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:17.185312   31648 type.go:168] "Request Body" body=""
	I1003 18:11:17.185380   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:17.185697   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:17.185779   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:17.685436   31648 type.go:168] "Request Body" body=""
	I1003 18:11:17.685502   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:17.685845   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:18.185654   31648 type.go:168] "Request Body" body=""
	I1003 18:11:18.185717   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:18.186072   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:18.685861   31648 type.go:168] "Request Body" body=""
	I1003 18:11:18.685924   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:18.686240   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:19.185000   31648 type.go:168] "Request Body" body=""
	I1003 18:11:19.185076   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:19.185392   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:19.685130   31648 type.go:168] "Request Body" body=""
	I1003 18:11:19.685199   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:19.685540   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:19.685603   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:20.185304   31648 type.go:168] "Request Body" body=""
	I1003 18:11:20.185368   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:20.185692   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:20.685437   31648 type.go:168] "Request Body" body=""
	I1003 18:11:20.685512   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:20.685889   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:21.185654   31648 type.go:168] "Request Body" body=""
	I1003 18:11:21.185736   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:21.186088   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:21.685864   31648 type.go:168] "Request Body" body=""
	I1003 18:11:21.685950   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:21.686257   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:21.686310   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:22.185029   31648 type.go:168] "Request Body" body=""
	I1003 18:11:22.185128   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:22.185448   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:22.685177   31648 type.go:168] "Request Body" body=""
	I1003 18:11:22.685257   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:22.685561   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:23.185277   31648 type.go:168] "Request Body" body=""
	I1003 18:11:23.185353   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:23.185666   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:23.685362   31648 type.go:168] "Request Body" body=""
	I1003 18:11:23.685435   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:23.685751   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:24.185475   31648 type.go:168] "Request Body" body=""
	I1003 18:11:24.185552   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:24.185910   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:24.185963   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:24.685584   31648 type.go:168] "Request Body" body=""
	I1003 18:11:24.685659   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:24.685971   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:25.185758   31648 type.go:168] "Request Body" body=""
	I1003 18:11:25.185842   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:25.186204   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:25.685956   31648 type.go:168] "Request Body" body=""
	I1003 18:11:25.686040   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:25.686348   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:26.185071   31648 type.go:168] "Request Body" body=""
	I1003 18:11:26.185144   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:26.185483   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:26.685189   31648 type.go:168] "Request Body" body=""
	I1003 18:11:26.685255   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:26.685555   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:26.685624   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:27.185293   31648 type.go:168] "Request Body" body=""
	I1003 18:11:27.185364   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:27.185670   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:27.685353   31648 type.go:168] "Request Body" body=""
	I1003 18:11:27.685417   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:27.685713   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:28.185462   31648 type.go:168] "Request Body" body=""
	I1003 18:11:28.185529   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:28.185838   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:28.685636   31648 type.go:168] "Request Body" body=""
	I1003 18:11:28.685711   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:28.686033   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:28.686095   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:29.185891   31648 type.go:168] "Request Body" body=""
	I1003 18:11:29.185959   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:29.186289   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:29.684999   31648 type.go:168] "Request Body" body=""
	I1003 18:11:29.685063   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:29.685358   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:30.185079   31648 type.go:168] "Request Body" body=""
	I1003 18:11:30.185147   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:30.185448   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:30.685153   31648 type.go:168] "Request Body" body=""
	I1003 18:11:30.685224   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:30.685542   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:31.185387   31648 type.go:168] "Request Body" body=""
	I1003 18:11:31.185470   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:31.185801   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:31.185869   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:31.685601   31648 type.go:168] "Request Body" body=""
	I1003 18:11:31.685665   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:31.686013   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:32.185823   31648 type.go:168] "Request Body" body=""
	I1003 18:11:32.185918   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:32.186314   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:32.685025   31648 type.go:168] "Request Body" body=""
	I1003 18:11:32.685090   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:32.685396   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:33.185093   31648 type.go:168] "Request Body" body=""
	I1003 18:11:33.185177   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:33.185492   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:33.685174   31648 type.go:168] "Request Body" body=""
	I1003 18:11:33.685294   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:33.685598   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:33.685653   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:34.185347   31648 type.go:168] "Request Body" body=""
	I1003 18:11:34.185424   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:34.185757   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:34.685584   31648 type.go:168] "Request Body" body=""
	I1003 18:11:34.685700   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:34.686040   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:35.185805   31648 type.go:168] "Request Body" body=""
	I1003 18:11:35.185867   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:35.186199   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:35.685954   31648 type.go:168] "Request Body" body=""
	I1003 18:11:35.686050   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:35.686359   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:35.686411   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:36.185172   31648 type.go:168] "Request Body" body=""
	I1003 18:11:36.185238   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:36.185535   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:36.685215   31648 type.go:168] "Request Body" body=""
	I1003 18:11:36.685302   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:36.685612   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:37.185339   31648 type.go:168] "Request Body" body=""
	I1003 18:11:37.185403   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:37.185728   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:37.685401   31648 type.go:168] "Request Body" body=""
	I1003 18:11:37.685477   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:37.685800   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:38.185642   31648 type.go:168] "Request Body" body=""
	I1003 18:11:38.185720   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:38.186056   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:38.186115   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:38.685846   31648 type.go:168] "Request Body" body=""
	I1003 18:11:38.685908   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:38.686230   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:39.184965   31648 type.go:168] "Request Body" body=""
	I1003 18:11:39.185068   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:39.185389   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:39.685076   31648 type.go:168] "Request Body" body=""
	I1003 18:11:39.685138   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:39.685429   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:40.185151   31648 type.go:168] "Request Body" body=""
	I1003 18:11:40.185227   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:40.185552   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:40.685234   31648 type.go:168] "Request Body" body=""
	I1003 18:11:40.685299   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:40.685612   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:40.685679   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:41.185407   31648 type.go:168] "Request Body" body=""
	I1003 18:11:41.185475   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:41.185810   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:41.685588   31648 type.go:168] "Request Body" body=""
	I1003 18:11:41.685663   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:41.685999   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:42.185821   31648 type.go:168] "Request Body" body=""
	I1003 18:11:42.185909   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:42.186287   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:42.685035   31648 type.go:168] "Request Body" body=""
	I1003 18:11:42.685109   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:42.685460   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:43.185163   31648 type.go:168] "Request Body" body=""
	I1003 18:11:43.185226   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:43.185569   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:43.185640   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:43.685320   31648 type.go:168] "Request Body" body=""
	I1003 18:11:43.685387   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:43.685687   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:44.185376   31648 type.go:168] "Request Body" body=""
	I1003 18:11:44.185445   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:44.185795   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:44.685599   31648 type.go:168] "Request Body" body=""
	I1003 18:11:44.685672   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:44.686013   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:45.185797   31648 type.go:168] "Request Body" body=""
	I1003 18:11:45.185863   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:45.186210   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:45.186272   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:45.684943   31648 type.go:168] "Request Body" body=""
	I1003 18:11:45.685023   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:45.685323   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:46.184972   31648 type.go:168] "Request Body" body=""
	I1003 18:11:46.185063   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:46.185368   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:46.685078   31648 type.go:168] "Request Body" body=""
	I1003 18:11:46.685143   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:46.685436   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:47.185171   31648 type.go:168] "Request Body" body=""
	I1003 18:11:47.185237   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:47.185530   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:47.685229   31648 type.go:168] "Request Body" body=""
	I1003 18:11:47.685292   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:47.685573   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:47.685625   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:48.185308   31648 type.go:168] "Request Body" body=""
	I1003 18:11:48.185378   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:48.185726   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:48.685435   31648 type.go:168] "Request Body" body=""
	I1003 18:11:48.685502   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:48.685818   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:49.185572   31648 type.go:168] "Request Body" body=""
	I1003 18:11:49.185639   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:49.185951   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:49.685755   31648 type.go:168] "Request Body" body=""
	I1003 18:11:49.685820   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:49.686165   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:49.686226   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:50.185972   31648 type.go:168] "Request Body" body=""
	I1003 18:11:50.186049   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:50.186347   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:50.685077   31648 type.go:168] "Request Body" body=""
	I1003 18:11:50.685149   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:50.685487   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:51.185355   31648 type.go:168] "Request Body" body=""
	I1003 18:11:51.185423   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:51.185749   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:51.685438   31648 type.go:168] "Request Body" body=""
	I1003 18:11:51.685502   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:51.685808   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:52.185581   31648 type.go:168] "Request Body" body=""
	I1003 18:11:52.185644   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:52.185967   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:52.186043   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:52.685763   31648 type.go:168] "Request Body" body=""
	I1003 18:11:52.685866   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:52.686218   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:53.184953   31648 type.go:168] "Request Body" body=""
	I1003 18:11:53.185051   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:53.185365   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:53.685069   31648 type.go:168] "Request Body" body=""
	I1003 18:11:53.685143   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:53.685457   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:54.185161   31648 type.go:168] "Request Body" body=""
	I1003 18:11:54.185226   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:54.185562   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:54.685310   31648 type.go:168] "Request Body" body=""
	I1003 18:11:54.685387   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:54.685726   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:54.685776   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:55.185417   31648 type.go:168] "Request Body" body=""
	I1003 18:11:55.185483   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:55.185815   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:55.685573   31648 type.go:168] "Request Body" body=""
	I1003 18:11:55.685677   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:55.686027   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:56.185731   31648 type.go:168] "Request Body" body=""
	I1003 18:11:56.185792   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:56.186116   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:56.685906   31648 type.go:168] "Request Body" body=""
	I1003 18:11:56.686004   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:56.686321   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:56.686379   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[log condensed for readability: the GET https://192.168.49.2:8441/api/v1/nodes/functional-889240 poll above repeats unchanged every ~500 ms from 18:11:57 through 18:12:56; every attempt gets no response (status="" milliseconds=0), and node_ready.go emits the same retry warning roughly every 2 s, ending with:]
	W1003 18:12:56.685903   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:57.185408   31648 type.go:168] "Request Body" body=""
	I1003 18:12:57.185483   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:57.185825   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:57.685392   31648 type.go:168] "Request Body" body=""
	I1003 18:12:57.685471   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:57.685812   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:58.185364   31648 type.go:168] "Request Body" body=""
	I1003 18:12:58.185431   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:58.185736   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:58.685296   31648 type.go:168] "Request Body" body=""
	I1003 18:12:58.685379   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:58.685735   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:59.185312   31648 type.go:168] "Request Body" body=""
	I1003 18:12:59.185381   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:59.185710   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:59.185769   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:59.685328   31648 type.go:168] "Request Body" body=""
	I1003 18:12:59.685404   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:59.685769   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:00.185320   31648 type.go:168] "Request Body" body=""
	I1003 18:13:00.185386   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:00.185713   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:00.685362   31648 type.go:168] "Request Body" body=""
	I1003 18:13:00.685457   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:00.685823   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:01.185697   31648 type.go:168] "Request Body" body=""
	I1003 18:13:01.185765   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:01.186114   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:01.186172   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:01.685762   31648 type.go:168] "Request Body" body=""
	I1003 18:13:01.685852   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:01.686240   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:02.185865   31648 type.go:168] "Request Body" body=""
	I1003 18:13:02.185951   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:02.186283   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:02.685917   31648 type.go:168] "Request Body" body=""
	I1003 18:13:02.686014   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:02.686332   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:03.185942   31648 type.go:168] "Request Body" body=""
	I1003 18:13:03.186032   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:03.186345   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:03.186397   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:03.684942   31648 type.go:168] "Request Body" body=""
	I1003 18:13:03.685055   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:03.685383   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:04.184939   31648 type.go:168] "Request Body" body=""
	I1003 18:13:04.185041   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:04.185351   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:04.685279   31648 type.go:168] "Request Body" body=""
	I1003 18:13:04.685358   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:04.685695   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:05.185233   31648 type.go:168] "Request Body" body=""
	I1003 18:13:05.185306   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:05.185608   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:05.685179   31648 type.go:168] "Request Body" body=""
	I1003 18:13:05.685255   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:05.685582   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:05.685657   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:06.185409   31648 type.go:168] "Request Body" body=""
	I1003 18:13:06.185478   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:06.185807   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:06.685397   31648 type.go:168] "Request Body" body=""
	I1003 18:13:06.685483   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:06.685824   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:07.185410   31648 type.go:168] "Request Body" body=""
	I1003 18:13:07.185478   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:07.185799   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:07.685361   31648 type.go:168] "Request Body" body=""
	I1003 18:13:07.685444   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:07.685776   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:07.685829   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:08.185354   31648 type.go:168] "Request Body" body=""
	I1003 18:13:08.185422   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:08.185738   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:08.685299   31648 type.go:168] "Request Body" body=""
	I1003 18:13:08.685380   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:08.685725   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:09.185279   31648 type.go:168] "Request Body" body=""
	I1003 18:13:09.185348   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:09.185678   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:09.685236   31648 type.go:168] "Request Body" body=""
	I1003 18:13:09.685312   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:09.685643   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:10.185169   31648 type.go:168] "Request Body" body=""
	I1003 18:13:10.185241   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:10.185552   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:10.185605   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:10.685136   31648 type.go:168] "Request Body" body=""
	I1003 18:13:10.685223   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:10.685575   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:11.185384   31648 type.go:168] "Request Body" body=""
	I1003 18:13:11.185459   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:11.185788   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:11.685352   31648 type.go:168] "Request Body" body=""
	I1003 18:13:11.685433   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:11.685753   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:12.185074   31648 type.go:168] "Request Body" body=""
	I1003 18:13:12.185141   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:12.185467   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:12.685018   31648 type.go:168] "Request Body" body=""
	I1003 18:13:12.685103   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:12.685412   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:12.685475   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:13.184997   31648 type.go:168] "Request Body" body=""
	I1003 18:13:13.185070   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:13.185403   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:13.684967   31648 type.go:168] "Request Body" body=""
	I1003 18:13:13.685061   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:13.685364   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:14.184923   31648 type.go:168] "Request Body" body=""
	I1003 18:13:14.185026   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:14.185364   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:14.685214   31648 type.go:168] "Request Body" body=""
	I1003 18:13:14.685280   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:14.685641   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:14.685714   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:15.185156   31648 type.go:168] "Request Body" body=""
	I1003 18:13:15.185255   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:15.185584   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:15.685142   31648 type.go:168] "Request Body" body=""
	I1003 18:13:15.685204   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:15.685537   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:16.185388   31648 type.go:168] "Request Body" body=""
	I1003 18:13:16.185470   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:16.185814   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:16.685411   31648 type.go:168] "Request Body" body=""
	I1003 18:13:16.685497   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:16.685863   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:16.685936   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:17.185442   31648 type.go:168] "Request Body" body=""
	I1003 18:13:17.185509   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:17.185829   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:17.685415   31648 type.go:168] "Request Body" body=""
	I1003 18:13:17.685525   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:17.685881   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:18.185495   31648 type.go:168] "Request Body" body=""
	I1003 18:13:18.185563   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:18.185876   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:18.685159   31648 type.go:168] "Request Body" body=""
	I1003 18:13:18.685230   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:18.685527   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:19.185084   31648 type.go:168] "Request Body" body=""
	I1003 18:13:19.185161   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:19.185450   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:19.185506   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:19.685103   31648 type.go:168] "Request Body" body=""
	I1003 18:13:19.685191   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:19.685616   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:20.185169   31648 type.go:168] "Request Body" body=""
	I1003 18:13:20.185250   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:20.185540   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:20.685137   31648 type.go:168] "Request Body" body=""
	I1003 18:13:20.685209   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:20.685542   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:21.185328   31648 type.go:168] "Request Body" body=""
	I1003 18:13:21.185409   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:21.185747   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:21.185800   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:21.685330   31648 type.go:168] "Request Body" body=""
	I1003 18:13:21.685393   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:21.685693   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:22.185267   31648 type.go:168] "Request Body" body=""
	I1003 18:13:22.185361   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:22.185713   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:22.685319   31648 type.go:168] "Request Body" body=""
	I1003 18:13:22.685385   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:22.685724   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:23.185388   31648 type.go:168] "Request Body" body=""
	I1003 18:13:23.185472   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:23.185812   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:23.185875   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:23.685447   31648 type.go:168] "Request Body" body=""
	I1003 18:13:23.685515   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:23.685833   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:24.185390   31648 type.go:168] "Request Body" body=""
	I1003 18:13:24.185457   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:24.185762   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:24.685669   31648 type.go:168] "Request Body" body=""
	I1003 18:13:24.685745   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:24.686090   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:25.185723   31648 type.go:168] "Request Body" body=""
	I1003 18:13:25.185792   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:25.186120   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:25.186180   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:25.685886   31648 type.go:168] "Request Body" body=""
	I1003 18:13:25.685961   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:25.686311   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:26.185007   31648 type.go:168] "Request Body" body=""
	I1003 18:13:26.185071   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:26.185380   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:26.684952   31648 type.go:168] "Request Body" body=""
	I1003 18:13:26.685041   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:26.685347   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:27.185970   31648 type.go:168] "Request Body" body=""
	I1003 18:13:27.186046   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:27.186356   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:27.186405   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:27.685041   31648 type.go:168] "Request Body" body=""
	I1003 18:13:27.685106   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:27.685416   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:28.185003   31648 type.go:168] "Request Body" body=""
	I1003 18:13:28.185070   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:28.185403   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:28.684968   31648 type.go:168] "Request Body" body=""
	I1003 18:13:28.685055   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:28.685378   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:29.184912   31648 type.go:168] "Request Body" body=""
	I1003 18:13:29.185004   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:29.185313   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:29.686012   31648 type.go:168] "Request Body" body=""
	I1003 18:13:29.686076   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:29.686383   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:29.686435   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:30.184929   31648 type.go:168] "Request Body" body=""
	I1003 18:13:30.185073   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:30.185387   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:30.684930   31648 type.go:168] "Request Body" body=""
	I1003 18:13:30.685049   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:30.685367   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:31.185212   31648 type.go:168] "Request Body" body=""
	I1003 18:13:31.185277   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:31.185571   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:31.685142   31648 type.go:168] "Request Body" body=""
	I1003 18:13:31.685208   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:31.685504   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:32.185085   31648 type.go:168] "Request Body" body=""
	I1003 18:13:32.185151   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:32.185469   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:32.185524   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:32.685051   31648 type.go:168] "Request Body" body=""
	I1003 18:13:32.685118   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:32.685424   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:33.185022   31648 type.go:168] "Request Body" body=""
	I1003 18:13:33.185092   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:33.185392   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:33.684962   31648 type.go:168] "Request Body" body=""
	I1003 18:13:33.685058   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:33.685365   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:34.184958   31648 type.go:168] "Request Body" body=""
	I1003 18:13:34.185041   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:34.185342   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:34.685149   31648 type.go:168] "Request Body" body=""
	I1003 18:13:34.685221   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:34.685506   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:34.685560   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:35.185096   31648 type.go:168] "Request Body" body=""
	I1003 18:13:35.185162   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:35.185507   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:35.685072   31648 type.go:168] "Request Body" body=""
	I1003 18:13:35.685138   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:35.685436   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:36.185249   31648 type.go:168] "Request Body" body=""
	I1003 18:13:36.185312   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:36.185619   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:36.685207   31648 type.go:168] "Request Body" body=""
	I1003 18:13:36.685270   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:36.685603   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:36.685664   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:37.185187   31648 type.go:168] "Request Body" body=""
	I1003 18:13:37.185258   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:37.185604   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:37.685170   31648 type.go:168] "Request Body" body=""
	I1003 18:13:37.685238   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:37.685540   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:38.185094   31648 type.go:168] "Request Body" body=""
	I1003 18:13:38.185165   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:38.185480   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:38.685085   31648 type.go:168] "Request Body" body=""
	I1003 18:13:38.685154   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:38.685491   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:39.185087   31648 type.go:168] "Request Body" body=""
	I1003 18:13:39.185161   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:39.185473   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:39.185530   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:39.685041   31648 type.go:168] "Request Body" body=""
	I1003 18:13:39.685104   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:39.685443   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:40.184993   31648 type.go:168] "Request Body" body=""
	I1003 18:13:40.185060   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:40.185369   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:40.684957   31648 type.go:168] "Request Body" body=""
	I1003 18:13:40.685046   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:40.685391   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:41.185256   31648 type.go:168] "Request Body" body=""
	I1003 18:13:41.185323   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:41.185632   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:41.185691   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:41.685166   31648 type.go:168] "Request Body" body=""
	I1003 18:13:41.685236   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:41.685524   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:42.185147   31648 type.go:168] "Request Body" body=""
	I1003 18:13:42.185215   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:42.185512   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:42.685072   31648 type.go:168] "Request Body" body=""
	I1003 18:13:42.685137   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:42.685438   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:43.185039   31648 type.go:168] "Request Body" body=""
	I1003 18:13:43.185104   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:43.185400   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:43.684960   31648 type.go:168] "Request Body" body=""
	I1003 18:13:43.685045   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:43.685352   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:43.685405   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:44.184941   31648 type.go:168] "Request Body" body=""
	I1003 18:13:44.185024   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:44.185317   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:44.685052   31648 type.go:168] "Request Body" body=""
	I1003 18:13:44.685120   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:44.685425   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:45.185055   31648 type.go:168] "Request Body" body=""
	I1003 18:13:45.185131   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:45.185445   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:45.685028   31648 type.go:168] "Request Body" body=""
	I1003 18:13:45.685092   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:45.685396   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:45.685450   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:46.185196   31648 type.go:168] "Request Body" body=""
	I1003 18:13:46.185259   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:46.185598   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:46.685146   31648 type.go:168] "Request Body" body=""
	I1003 18:13:46.685207   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:46.685520   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:47.185085   31648 type.go:168] "Request Body" body=""
	I1003 18:13:47.185146   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:47.185435   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:47.685023   31648 type.go:168] "Request Body" body=""
	I1003 18:13:47.685083   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:47.685387   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:48.184938   31648 type.go:168] "Request Body" body=""
	I1003 18:13:48.185024   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:48.185317   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:48.185366   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:48.685968   31648 type.go:168] "Request Body" body=""
	I1003 18:13:48.686071   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:48.686392   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:49.184927   31648 type.go:168] "Request Body" body=""
	I1003 18:13:49.185007   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:49.185301   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:49.685951   31648 type.go:168] "Request Body" body=""
	I1003 18:13:49.686058   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:49.686375   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:50.185987   31648 type.go:168] "Request Body" body=""
	I1003 18:13:50.186049   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:50.186339   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:50.186393   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:50.686008   31648 type.go:168] "Request Body" body=""
	I1003 18:13:50.686095   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:50.686413   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:51.185213   31648 type.go:168] "Request Body" body=""
	I1003 18:13:51.185281   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:51.185558   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:51.685097   31648 type.go:168] "Request Body" body=""
	I1003 18:13:51.685183   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:51.685518   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:52.185069   31648 type.go:168] "Request Body" body=""
	I1003 18:13:52.185132   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:52.185409   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:52.685038   31648 type.go:168] "Request Body" body=""
	I1003 18:13:52.685113   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:52.685416   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:52.685468   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:53.184948   31648 type.go:168] "Request Body" body=""
	I1003 18:13:53.185026   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:53.185309   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:53.685950   31648 type.go:168] "Request Body" body=""
	I1003 18:13:53.686043   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:53.686348   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:54.185948   31648 type.go:168] "Request Body" body=""
	I1003 18:13:54.186022   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:54.186302   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:54.685064   31648 type.go:168] "Request Body" body=""
	I1003 18:13:54.685138   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:54.685429   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:54.685486   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:55.185055   31648 type.go:168] "Request Body" body=""
	I1003 18:13:55.185122   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:55.185388   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:55.685066   31648 type.go:168] "Request Body" body=""
	I1003 18:13:55.685164   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:55.685462   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:56.185338   31648 type.go:168] "Request Body" body=""
	I1003 18:13:56.185406   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:56.185704   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:56.685239   31648 type.go:168] "Request Body" body=""
	I1003 18:13:56.685304   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:56.685629   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:56.685684   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:57.185240   31648 type.go:168] "Request Body" body=""
	I1003 18:13:57.185305   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:57.185635   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:57.685223   31648 type.go:168] "Request Body" body=""
	I1003 18:13:57.685287   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:57.685578   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:58.185123   31648 type.go:168] "Request Body" body=""
	I1003 18:13:58.185189   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:58.185504   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:58.685074   31648 type.go:168] "Request Body" body=""
	I1003 18:13:58.685137   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:58.685464   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:59.185038   31648 type.go:168] "Request Body" body=""
	I1003 18:13:59.185102   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:59.185391   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:59.185441   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:59.684997   31648 type.go:168] "Request Body" body=""
	I1003 18:13:59.685066   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:59.685383   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:00.184957   31648 type.go:168] "Request Body" body=""
	I1003 18:14:00.185041   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:00.185348   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:00.685990   31648 type.go:168] "Request Body" body=""
	I1003 18:14:00.686052   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:00.686352   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:01.185220   31648 type.go:168] "Request Body" body=""
	I1003 18:14:01.185292   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:01.185619   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:01.185673   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:01.685170   31648 type.go:168] "Request Body" body=""
	I1003 18:14:01.685244   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:01.685572   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:02.185133   31648 type.go:168] "Request Body" body=""
	I1003 18:14:02.185197   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:02.185506   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:02.685118   31648 type.go:168] "Request Body" body=""
	I1003 18:14:02.685184   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:02.685488   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:03.185090   31648 type.go:168] "Request Body" body=""
	I1003 18:14:03.185159   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:03.185488   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:03.685055   31648 type.go:168] "Request Body" body=""
	I1003 18:14:03.685119   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:03.685428   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:03.685480   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:04.185061   31648 type.go:168] "Request Body" body=""
	I1003 18:14:04.185131   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:04.185458   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:04.685298   31648 type.go:168] "Request Body" body=""
	I1003 18:14:04.685366   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:04.685670   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:05.185278   31648 type.go:168] "Request Body" body=""
	I1003 18:14:05.185348   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:05.185711   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:05.685243   31648 type.go:168] "Request Body" body=""
	I1003 18:14:05.685313   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:05.685621   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:05.685670   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:06.185390   31648 type.go:168] "Request Body" body=""
	I1003 18:14:06.185454   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:06.185796   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:06.685338   31648 type.go:168] "Request Body" body=""
	I1003 18:14:06.685404   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:06.685744   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:07.185312   31648 type.go:168] "Request Body" body=""
	I1003 18:14:07.185375   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:07.185694   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:07.685319   31648 type.go:168] "Request Body" body=""
	I1003 18:14:07.685388   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:07.685720   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:07.685775   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:08.185299   31648 type.go:168] "Request Body" body=""
	I1003 18:14:08.185362   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:08.185681   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:08.685362   31648 type.go:168] "Request Body" body=""
	I1003 18:14:08.685501   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:08.686040   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:09.185088   31648 type.go:168] "Request Body" body=""
	I1003 18:14:09.185166   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:09.185492   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:09.685168   31648 type.go:168] "Request Body" body=""
	I1003 18:14:09.685230   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:09.685527   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:10.185203   31648 type.go:168] "Request Body" body=""
	I1003 18:14:10.185266   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:10.185584   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:10.185635   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:10.685306   31648 type.go:168] "Request Body" body=""
	I1003 18:14:10.685367   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:10.685706   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:11.185477   31648 type.go:168] "Request Body" body=""
	I1003 18:14:11.185545   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:11.185858   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:11.685629   31648 type.go:168] "Request Body" body=""
	I1003 18:14:11.685690   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:11.686017   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:12.185788   31648 type.go:168] "Request Body" body=""
	I1003 18:14:12.185850   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:12.186194   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:12.186261   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:12.685007   31648 type.go:168] "Request Body" body=""
	I1003 18:14:12.685075   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:12.685367   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:13.185078   31648 type.go:168] "Request Body" body=""
	I1003 18:14:13.185142   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:13.185434   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:13.685146   31648 type.go:168] "Request Body" body=""
	I1003 18:14:13.685215   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:13.685514   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:14.185200   31648 type.go:168] "Request Body" body=""
	I1003 18:14:14.185264   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:14.185577   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:14.685359   31648 type.go:168] "Request Body" body=""
	W1003 18:14:14.685420   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded
	I1003 18:14:14.685433   31648 node_ready.go:38] duration metric: took 6m0.000605507s for node "functional-889240" to be "Ready" ...
	I1003 18:14:14.688030   31648 out.go:203] 
	W1003 18:14:14.689379   31648 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1003 18:14:14.689402   31648 out.go:285] * 
	W1003 18:14:14.691089   31648 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:14:14.693118   31648 out.go:203] 

                                                
                                                
** /stderr **
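Each cycle in the loop above is a plain HTTPS GET against the node object with a protobuf-preferring Accept header, refused because nothing is listening on 192.168.49.2:8441. One probe can be reproduced by hand with net/http; this is a sketch only, and InsecureSkipVerify stands in for minikube's real client-certificate/CA handling against its self-signed apiserver cert:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	// Skip cert verification only because this is a throwaway probe; the
	// real client authenticates with the cluster CA and client certs.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	req, _ := http.NewRequest("GET", "https://192.168.49.2:8441/api/v1/nodes/functional-889240", nil)
	req.Header.Set("Accept", "application/vnd.kubernetes.protobuf,application/json")
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("probe failed:", err) // "connect: connection refused" while the apiserver is down
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}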
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-amd64 start -p functional-889240 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m4.109098852s for "functional-889240" cluster.
I1003 18:14:15.125255   12212 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
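The six-minute stretch of refused connections is minikube polling the node's Ready condition every ~500 ms until its deadline; the final entry fails inside the client-side rate limiter once the remaining context budget is shorter than the next wait. A minimal client-go sketch of the same wait pattern follows; it is an illustration, not minikube's node_ready.go, and the kubeconfig path is a placeholder:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; minikube builds its REST config internally.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500 ms for up to 6 minutes, matching the cadence in the log.
	// Transient errors (connection refused while the apiserver is down)
	// return (false, nil) so the poll keeps retrying until the deadline.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "functional-889240", metav1.GetOptions{})
			if err != nil {
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("wait result:", err) // "context deadline exceeded" in the failure above
}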
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
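The HOST ENV snapshot records the proxy trio the test host exports, with "<empty>" meaning unset. A tiny, purely illustrative sketch of the same snapshot:

package main

import (
	"fmt"
	"os"
)

// Print the three proxy variables the post-mortem snapshots; unset
// variables are reported as "<empty>", matching the output above.
func main() {
	for _, key := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
		val := os.Getenv(key)
		if val == "" {
			val = "<empty>"
		}
		fmt.Printf("%s=%q\n", key, val)
	}
}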
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-889240
helpers_test.go:243: (dbg) docker inspect functional-889240:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	        "Created": "2025-10-03T17:59:56.619817507Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 26766,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T17:59:56.652603806Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hosts",
	        "LogPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a-json.log",
	        "Name": "/functional-889240",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-889240:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-889240",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	                "LowerDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-889240",
	                "Source": "/var/lib/docker/volumes/functional-889240/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-889240",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-889240",
	                "name.minikube.sigs.k8s.io": "functional-889240",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da15d31dc23bdd4694ae9e3b61015d7ce0d61668c73d3e386422834c6f0321d8",
	            "SandboxKey": "/var/run/docker/netns/da15d31dc23b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-889240": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:9e:1d:e9:d9:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "03281bed183d0817c0bc237b5c25093fc10222138aedde4c7deef5823759fa24",
	                    "EndpointID": "28fa584fdd6e253816ae08a2460ef02b91085c8a7996d55008876e3bd65bbc7e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-889240",
	                        "9f4f0f10b4a9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
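In the inspect output, the apiserver port 8441/tcp is published on 127.0.0.1:32781 and the container is Running, yet connections to 192.168.49.2:8441 are refused because no apiserver is listening inside it. That mapping can be extracted programmatically; the sketch below shells out to the Docker CLI with a Go template that indexes NetworkSettings.Ports exactly as laid out in the JSON above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Ask Docker for the host port bound to 8441/tcp in the minikube container.
// For the inspect output above this prints 32781.
func main() {
	format := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "inspect", "--format", format, "functional-889240").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("host port for 8441/tcp:", strings.TrimSpace(string(out)))
}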
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240: exit status 2 (318.232084ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
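`--format={{.Host}}` is a Go text/template applied to minikube's status struct, which is why the command can print "Running" (the Docker container state) while exiting with status 2 (the apiserver is not healthy). A simplified sketch of that formatting step; the field names are assumed from minikube's documented status output, not copied from its source:

package main

import (
	"os"
	"text/template"
)

// Simplified stand-in for minikube's status struct; the real one carries
// more fields, but Host/Kubelet/APIServer are the documented ones.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	// Same template the post-mortem passed via --format={{.Host}}.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped"}
	_ = tmpl.Execute(os.Stdout, st) // prints "Running"
}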
helpers_test.go:252: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 logs -n 25
helpers_test.go:260: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-455553                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-455553   │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │ 03 Oct 25 17:42 UTC │
	│ start   │ --download-only -p download-docker-423289 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-423289 │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │                     │
	│ delete  │ -p download-docker-423289                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-423289 │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │ 03 Oct 25 17:42 UTC │
	│ start   │ --download-only -p binary-mirror-626924 --alsologtostderr --binary-mirror http://127.0.0.1:44037 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-626924   │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │                     │
	│ delete  │ -p binary-mirror-626924                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-626924   │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │ 03 Oct 25 17:42 UTC │
	│ addons  │ disable dashboard -p addons-051972                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-051972          │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │                     │
	│ addons  │ enable dashboard -p addons-051972                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-051972          │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │                     │
	│ start   │ -p addons-051972 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-051972          │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │                     │
	│ delete  │ -p addons-051972                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-051972          │ jenkins │ v1.37.0 │ 03 Oct 25 17:51 UTC │ 03 Oct 25 17:51 UTC │
	│ start   │ -p nospam-093146 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-093146 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:51 UTC │                     │
	│ start   │ nospam-093146 --log_dir /tmp/nospam-093146 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │                     │
	│ start   │ nospam-093146 --log_dir /tmp/nospam-093146 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │                     │
	│ start   │ nospam-093146 --log_dir /tmp/nospam-093146 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │                     │
	│ pause   │ nospam-093146 --log_dir /tmp/nospam-093146 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ pause   │ nospam-093146 --log_dir /tmp/nospam-093146 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ pause   │ nospam-093146 --log_dir /tmp/nospam-093146 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ unpause │ nospam-093146 --log_dir /tmp/nospam-093146 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ unpause │ nospam-093146 --log_dir /tmp/nospam-093146 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ unpause │ nospam-093146 --log_dir /tmp/nospam-093146 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ stop    │ nospam-093146 --log_dir /tmp/nospam-093146 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ stop    │ nospam-093146 --log_dir /tmp/nospam-093146 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ stop    │ nospam-093146 --log_dir /tmp/nospam-093146 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ delete  │ -p nospam-093146                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ start   │ -p functional-889240 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-889240      │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │                     │
	│ start   │ -p functional-889240 --alsologtostderr -v=8                                                                                                                                                                                                                                                                                                                                                                                                                              │ functional-889240      │ jenkins │ v1.37.0 │ 03 Oct 25 18:08 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:08:11
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:08:11.068231   31648 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:08:11.068486   31648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:08:11.068496   31648 out.go:374] Setting ErrFile to fd 2...
	I1003 18:08:11.068502   31648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:08:11.068729   31648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:08:11.069215   31648 out.go:368] Setting JSON to false
	I1003 18:08:11.070085   31648 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3042,"bootTime":1759511849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:08:11.070168   31648 start.go:140] virtualization: kvm guest
	I1003 18:08:11.073397   31648 out.go:179] * [functional-889240] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:08:11.074567   31648 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:08:11.074571   31648 notify.go:220] Checking for updates...
	I1003 18:08:11.077123   31648 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:08:11.078380   31648 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:11.079542   31648 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:08:11.080665   31648 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:08:11.081754   31648 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:08:11.083246   31648 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:08:11.083337   31648 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:08:11.109195   31648 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:08:11.109276   31648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:08:11.161161   31648 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:08:11.151693527 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:08:11.161260   31648 docker.go:318] overlay module found
	I1003 18:08:11.162933   31648 out.go:179] * Using the docker driver based on existing profile
	I1003 18:08:11.164103   31648 start.go:304] selected driver: docker
	I1003 18:08:11.164115   31648 start.go:924] validating driver "docker" against &{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:08:11.164183   31648 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:08:11.164266   31648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:08:11.217384   31648 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:08:11.207171248 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:08:11.218094   31648 cni.go:84] Creating CNI manager for ""
	I1003 18:08:11.218156   31648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:08:11.218200   31648 start.go:348] cluster config:
	{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:08:11.220110   31648 out.go:179] * Starting "functional-889240" primary control-plane node in "functional-889240" cluster
	I1003 18:08:11.221257   31648 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:08:11.222336   31648 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:08:11.223595   31648 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:08:11.223644   31648 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:08:11.223654   31648 cache.go:58] Caching tarball of preloaded images
	I1003 18:08:11.223686   31648 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:08:11.223758   31648 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:08:11.223772   31648 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:08:11.223859   31648 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/config.json ...
	I1003 18:08:11.242913   31648 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:08:11.242930   31648 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:08:11.242946   31648 cache.go:232] Successfully downloaded all kic artifacts
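For reference, the cache probe above short-circuits both downloads: the preload tarball already exists on disk and the kicbase image is already in the local Docker daemon. A minimal Go sketch of the same two checks, with the path and image reference taken from the log (an illustration, not minikube's actual cache code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Preload tarball path as reported in the log above.
		tarball := "/home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
		if _, err := os.Stat(tarball); err == nil {
			fmt.Println("preload tarball cached, skipping download")
		}
		// `docker image inspect` exits non-zero when the image is absent,
		// so a nil error means the kicbase image is already loaded.
		img := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643"
		if err := exec.Command("docker", "image", "inspect", img).Run(); err == nil {
			fmt.Println("kicbase image present in local daemon, skipping pull")
		}
	}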
	I1003 18:08:11.242988   31648 start.go:360] acquireMachinesLock for functional-889240: {Name:mk6750a9fb1c1c3747b0abf2aebe2a2d0047ae3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:08:11.243063   31648 start.go:364] duration metric: took 50.516µs to acquireMachinesLock for "functional-889240"
	I1003 18:08:11.243090   31648 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:08:11.243097   31648 fix.go:54] fixHost starting: 
	I1003 18:08:11.243298   31648 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:08:11.259925   31648 fix.go:112] recreateIfNeeded on functional-889240: state=Running err=<nil>
	W1003 18:08:11.259951   31648 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 18:08:11.261699   31648 out.go:252] * Updating the running docker "functional-889240" container ...
	I1003 18:08:11.261731   31648 machine.go:93] provisionDockerMachine start ...
	I1003 18:08:11.261806   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:11.278828   31648 main.go:141] libmachine: Using SSH client type: native
	I1003 18:08:11.279109   31648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:08:11.279121   31648 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:08:11.421621   31648 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-889240
	
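The provisioning step above drives a raw SSH session against the container's forwarded port (127.0.0.1:32778 here) and runs `hostname`. A minimal Go sketch of the same round-trip using golang.org/x/crypto/ssh; the key path, user, and address are taken from the log, and the rest is illustrative rather than minikube's implementation:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test container
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:32778", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		out, err := sess.Output("hostname") // same command the log shows
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("remote hostname: %s", out)
	}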
	I1003 18:08:11.421642   31648 ubuntu.go:182] provisioning hostname "functional-889240"
	I1003 18:08:11.421693   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:11.439154   31648 main.go:141] libmachine: Using SSH client type: native
	I1003 18:08:11.439372   31648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:08:11.439384   31648 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-889240 && echo "functional-889240" | sudo tee /etc/hostname
	I1003 18:08:11.590164   31648 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-889240
	
	I1003 18:08:11.590238   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:11.607612   31648 main.go:141] libmachine: Using SSH client type: native
	I1003 18:08:11.607822   31648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:08:11.607839   31648 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-889240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-889240/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-889240' | sudo tee -a /etc/hosts; 
				fi
			fi
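The guarded shell above is idempotent: it rewrites the 127.0.1.1 line only when the hostname is not already present in /etc/hosts. The same decision logic in a short Go sketch (a hypothetical helper operating on the file contents in memory; it mirrors the grep/sed guard rather than reproducing minikube's code):

	package main

	import (
		"fmt"
		"log"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mirrors the shell guard: keep the file unchanged if a
	// line already ends with the hostname, otherwise rewrite the 127.0.1.1
	// entry, or append one if none exists.
	func ensureHostsEntry(hosts, name string) string {
		if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
			return hosts
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Print(ensureHostsEntry(string(data), "functional-889240"))
	}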
	I1003 18:08:11.750385   31648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:08:11.750412   31648 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:08:11.750443   31648 ubuntu.go:190] setting up certificates
	I1003 18:08:11.750454   31648 provision.go:84] configureAuth start
	I1003 18:08:11.750512   31648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-889240
	I1003 18:08:11.767416   31648 provision.go:143] copyHostCerts
	I1003 18:08:11.767453   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:08:11.767484   31648 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:08:11.767498   31648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:08:11.767564   31648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:08:11.767659   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:08:11.767679   31648 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:08:11.767686   31648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:08:11.767714   31648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:08:11.767934   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:08:11.768183   31648 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:08:11.768200   31648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:08:11.768251   31648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:08:11.768350   31648 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.functional-889240 san=[127.0.0.1 192.168.49.2 functional-889240 localhost minikube]
	I1003 18:08:11.920440   31648 provision.go:177] copyRemoteCerts
	I1003 18:08:11.920514   31648 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:08:11.920551   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:11.938061   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.037875   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:08:12.037937   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1003 18:08:12.054720   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:08:12.054773   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 18:08:12.071055   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:08:12.071110   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:08:12.087547   31648 provision.go:87] duration metric: took 337.079976ms to configureAuth
	I1003 18:08:12.087574   31648 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:08:12.087766   31648 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:08:12.087867   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.105048   31648 main.go:141] libmachine: Using SSH client type: native
	I1003 18:08:12.105289   31648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:08:12.105305   31648 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:08:12.366340   31648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
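The step above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O so the --insecure-registry flag for the service CIDR takes effect. A Go sketch of the equivalent local operation (must run as root; the file contents are copied from the log, the rest is illustrative):

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		// Same drop-in the SSH command writes, trailing space included.
		opts := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
		if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(opts), 0o644); err != nil {
			log.Fatal(err)
		}
		// Restart so the crio unit re-reads its environment file.
		if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
			log.Fatalf("restart crio: %v\n%s", err, out)
		}
	}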
	I1003 18:08:12.366367   31648 machine.go:96] duration metric: took 1.104629442s to provisionDockerMachine
	I1003 18:08:12.366377   31648 start.go:293] postStartSetup for "functional-889240" (driver="docker")
	I1003 18:08:12.366388   31648 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:08:12.366431   31648 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:08:12.366476   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.383468   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.483988   31648 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:08:12.487264   31648 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1003 18:08:12.487282   31648 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1003 18:08:12.487289   31648 command_runner.go:130] > VERSION_ID="12"
	I1003 18:08:12.487295   31648 command_runner.go:130] > VERSION="12 (bookworm)"
	I1003 18:08:12.487301   31648 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1003 18:08:12.487306   31648 command_runner.go:130] > ID=debian
	I1003 18:08:12.487313   31648 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1003 18:08:12.487320   31648 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1003 18:08:12.487329   31648 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1003 18:08:12.487402   31648 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:08:12.487425   31648 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:08:12.487438   31648 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:08:12.487491   31648 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:08:12.487581   31648 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:08:12.487593   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:08:12.487688   31648 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts -> hosts in /etc/test/nested/copy/12212
	I1003 18:08:12.487697   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts -> /etc/test/nested/copy/12212/hosts
	I1003 18:08:12.487740   31648 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/12212
	I1003 18:08:12.495127   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:08:12.511597   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts --> /etc/test/nested/copy/12212/hosts (40 bytes)
	I1003 18:08:12.528571   31648 start.go:296] duration metric: took 162.180752ms for postStartSetup
	I1003 18:08:12.528647   31648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:08:12.528710   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.546258   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.643641   31648 command_runner.go:130] > 39%
	I1003 18:08:12.643858   31648 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:08:12.648017   31648 command_runner.go:130] > 179G
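The two df probes above report percent-used and free gigabytes on /var. Roughly the same numbers can be read in-process via statfs, as in this Go sketch (the percentage differs slightly from df's, which also accounts for reserved blocks):

	package main

	import (
		"fmt"
		"log"
		"syscall"
	)

	func main() {
		var st syscall.Statfs_t
		if err := syscall.Statfs("/var", &st); err != nil {
			log.Fatal(err)
		}
		total := st.Blocks * uint64(st.Bsize)
		free := st.Bavail * uint64(st.Bsize) // space available to non-root
		usedPct := 100 * float64(total-free) / float64(total)
		fmt.Printf("%.0f%% used, %dG free\n", usedPct, free>>30)
	}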
	I1003 18:08:12.648284   31648 fix.go:56] duration metric: took 1.405183874s for fixHost
	I1003 18:08:12.648303   31648 start.go:83] releasing machines lock for "functional-889240", held for 1.405223544s
	I1003 18:08:12.648364   31648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-889240
	I1003 18:08:12.665548   31648 ssh_runner.go:195] Run: cat /version.json
	I1003 18:08:12.665589   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.665627   31648 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:08:12.665684   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.683771   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.684037   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.833728   31648 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1003 18:08:12.833784   31648 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1003 18:08:12.833903   31648 ssh_runner.go:195] Run: systemctl --version
	I1003 18:08:12.840008   31648 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1003 18:08:12.840056   31648 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1003 18:08:12.840282   31648 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:08:12.874135   31648 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1003 18:08:12.878285   31648 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1003 18:08:12.878575   31648 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:08:12.878637   31648 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:08:12.886227   31648 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 18:08:12.886250   31648 start.go:495] detecting cgroup driver to use...
	I1003 18:08:12.886282   31648 detect.go:190] detected "systemd" cgroup driver on host os
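Having detected a systemd cgroup driver on the host, the start logic later forces CRI-O onto the same manager. One simple way to reproduce the probe, since the docker info dump above already carries CgroupDriver:systemd, is to ask the daemon directly; this sketch is a stand-in, not minikube's detect.go:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// The Docker daemon reports its own cgroup driver, which matches
		// the host's on a standard setup.
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("detected cgroup driver:", strings.TrimSpace(string(out)))
	}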
	I1003 18:08:12.886327   31648 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:08:12.900106   31648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:08:12.911429   31648 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:08:12.911477   31648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:08:12.925289   31648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:08:12.936739   31648 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:08:13.020667   31648 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:08:13.102263   31648 docker.go:234] disabling docker service ...
	I1003 18:08:13.102328   31648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:08:13.115759   31648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:08:13.127581   31648 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:08:13.208801   31648 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:08:13.298232   31648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:08:13.314511   31648 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:08:13.327949   31648 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1003 18:08:13.328859   31648 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:08:13.328914   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.337658   31648 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:08:13.337709   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.346162   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.354712   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.363098   31648 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:08:13.370793   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.378940   31648 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.386700   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
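The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches cgroup_manager to systemd, resets conmon_cgroup, and opens unprivileged ports via default_sysctls. The first two rewrites, expressed as Go regexp replacements over the file contents (the sample input is invented for illustration; the patterns mirror the sed expressions):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Hypothetical starting contents of 02-crio.conf.
		conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"
		// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "systemd"`)
		fmt.Print(conf)
	}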
	I1003 18:08:13.394938   31648 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:08:13.401467   31648 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1003 18:08:13.402164   31648 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:08:13.409040   31648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:08:13.496423   31648 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 18:08:13.599891   31648 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:08:13.599956   31648 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:08:13.603739   31648 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1003 18:08:13.603760   31648 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1003 18:08:13.603769   31648 command_runner.go:130] > Device: 0,59	Inode: 3868        Links: 1
	I1003 18:08:13.603779   31648 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1003 18:08:13.603787   31648 command_runner.go:130] > Access: 2025-10-03 18:08:13.582699245 +0000
	I1003 18:08:13.603796   31648 command_runner.go:130] > Modify: 2025-10-03 18:08:13.582699245 +0000
	I1003 18:08:13.603806   31648 command_runner.go:130] > Change: 2025-10-03 18:08:13.582699245 +0000
	I1003 18:08:13.603811   31648 command_runner.go:130] >  Birth: 2025-10-03 18:08:13.582699245 +0000
	I1003 18:08:13.603837   31648 start.go:563] Will wait 60s for crictl version
	I1003 18:08:13.603884   31648 ssh_runner.go:195] Run: which crictl
	I1003 18:08:13.607403   31648 command_runner.go:130] > /usr/local/bin/crictl
	I1003 18:08:13.607458   31648 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:08:13.630641   31648 command_runner.go:130] > Version:  0.1.0
	I1003 18:08:13.630667   31648 command_runner.go:130] > RuntimeName:  cri-o
	I1003 18:08:13.630673   31648 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1003 18:08:13.630680   31648 command_runner.go:130] > RuntimeApiVersion:  v1
	I1003 18:08:13.630699   31648 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:08:13.630764   31648 ssh_runner.go:195] Run: crio --version
	I1003 18:08:13.656303   31648 command_runner.go:130] > crio version 1.34.1
	I1003 18:08:13.656324   31648 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1003 18:08:13.656329   31648 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1003 18:08:13.656339   31648 command_runner.go:130] >    GitTreeState:   dirty
	I1003 18:08:13.656344   31648 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1003 18:08:13.656348   31648 command_runner.go:130] >    GoVersion:      go1.24.6
	I1003 18:08:13.656352   31648 command_runner.go:130] >    Compiler:       gc
	I1003 18:08:13.656365   31648 command_runner.go:130] >    Platform:       linux/amd64
	I1003 18:08:13.656372   31648 command_runner.go:130] >    Linkmode:       static
	I1003 18:08:13.656378   31648 command_runner.go:130] >    BuildTags:
	I1003 18:08:13.656383   31648 command_runner.go:130] >      static
	I1003 18:08:13.656387   31648 command_runner.go:130] >      netgo
	I1003 18:08:13.656393   31648 command_runner.go:130] >      osusergo
	I1003 18:08:13.656396   31648 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1003 18:08:13.656402   31648 command_runner.go:130] >      seccomp
	I1003 18:08:13.656405   31648 command_runner.go:130] >      apparmor
	I1003 18:08:13.656410   31648 command_runner.go:130] >      selinux
	I1003 18:08:13.656415   31648 command_runner.go:130] >    LDFlags:          unknown
	I1003 18:08:13.656421   31648 command_runner.go:130] >    SeccompEnabled:   true
	I1003 18:08:13.656426   31648 command_runner.go:130] >    AppArmorEnabled:  false
	I1003 18:08:13.657588   31648 ssh_runner.go:195] Run: crio --version
	I1003 18:08:13.682656   31648 command_runner.go:130] > crio version 1.34.1
	I1003 18:08:13.682693   31648 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1003 18:08:13.682698   31648 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1003 18:08:13.682703   31648 command_runner.go:130] >    GitTreeState:   dirty
	I1003 18:08:13.682708   31648 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1003 18:08:13.682712   31648 command_runner.go:130] >    GoVersion:      go1.24.6
	I1003 18:08:13.682716   31648 command_runner.go:130] >    Compiler:       gc
	I1003 18:08:13.682720   31648 command_runner.go:130] >    Platform:       linux/amd64
	I1003 18:08:13.682724   31648 command_runner.go:130] >    Linkmode:       static
	I1003 18:08:13.682728   31648 command_runner.go:130] >    BuildTags:
	I1003 18:08:13.682733   31648 command_runner.go:130] >      static
	I1003 18:08:13.682737   31648 command_runner.go:130] >      netgo
	I1003 18:08:13.682741   31648 command_runner.go:130] >      osusergo
	I1003 18:08:13.682746   31648 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1003 18:08:13.682753   31648 command_runner.go:130] >      seccomp
	I1003 18:08:13.682756   31648 command_runner.go:130] >      apparmor
	I1003 18:08:13.682759   31648 command_runner.go:130] >      selinux
	I1003 18:08:13.682763   31648 command_runner.go:130] >    LDFlags:          unknown
	I1003 18:08:13.682770   31648 command_runner.go:130] >    SeccompEnabled:   true
	I1003 18:08:13.682774   31648 command_runner.go:130] >    AppArmorEnabled:  false
	I1003 18:08:13.685817   31648 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:08:13.686852   31648 cli_runner.go:164] Run: docker network inspect functional-889240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:08:13.703291   31648 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:08:13.707207   31648 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1003 18:08:13.707295   31648 kubeadm.go:883] updating cluster {Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:08:13.707417   31648 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:08:13.707473   31648 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:08:13.737725   31648 command_runner.go:130] > {
	I1003 18:08:13.737745   31648 command_runner.go:130] >   "images":  [
	I1003 18:08:13.737749   31648 command_runner.go:130] >     {
	I1003 18:08:13.737755   31648 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1003 18:08:13.737763   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.737773   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1003 18:08:13.737780   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737786   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.737798   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1003 18:08:13.737807   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1003 18:08:13.737811   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737815   31648 command_runner.go:130] >       "size":  "109379124",
	I1003 18:08:13.737819   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.737828   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.737832   31648 command_runner.go:130] >     },
	I1003 18:08:13.737835   31648 command_runner.go:130] >     {
	I1003 18:08:13.737841   31648 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1003 18:08:13.737848   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.737859   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1003 18:08:13.737868   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737875   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.737886   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1003 18:08:13.737898   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1003 18:08:13.737904   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737908   31648 command_runner.go:130] >       "size":  "31470524",
	I1003 18:08:13.737914   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.737920   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.737931   31648 command_runner.go:130] >     },
	I1003 18:08:13.737939   31648 command_runner.go:130] >     {
	I1003 18:08:13.737948   31648 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1003 18:08:13.737958   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.737969   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1003 18:08:13.737987   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737995   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738007   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1003 18:08:13.738023   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1003 18:08:13.738031   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738037   31648 command_runner.go:130] >       "size":  "76103547",
	I1003 18:08:13.738045   31648 command_runner.go:130] >       "username":  "nonroot",
	I1003 18:08:13.738049   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738054   31648 command_runner.go:130] >     },
	I1003 18:08:13.738058   31648 command_runner.go:130] >     {
	I1003 18:08:13.738070   31648 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1003 18:08:13.738081   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738091   31648 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1003 18:08:13.738100   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738110   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738124   31648 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1003 18:08:13.738137   31648 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1003 18:08:13.738143   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738148   31648 command_runner.go:130] >       "size":  "195976448",
	I1003 18:08:13.738155   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738165   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.738175   31648 command_runner.go:130] >       },
	I1003 18:08:13.738187   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738197   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738205   31648 command_runner.go:130] >     },
	I1003 18:08:13.738212   31648 command_runner.go:130] >     {
	I1003 18:08:13.738223   31648 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1003 18:08:13.738230   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738236   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1003 18:08:13.738245   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738256   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738270   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1003 18:08:13.738285   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1003 18:08:13.738293   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738301   31648 command_runner.go:130] >       "size":  "89046001",
	I1003 18:08:13.738308   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738312   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.738315   31648 command_runner.go:130] >       },
	I1003 18:08:13.738320   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738329   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738338   31648 command_runner.go:130] >     },
	I1003 18:08:13.738344   31648 command_runner.go:130] >     {
	I1003 18:08:13.738357   31648 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1003 18:08:13.738366   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738377   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1003 18:08:13.738386   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738395   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738402   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1003 18:08:13.738418   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1003 18:08:13.738427   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738434   31648 command_runner.go:130] >       "size":  "76004181",
	I1003 18:08:13.738443   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738453   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.738460   31648 command_runner.go:130] >       },
	I1003 18:08:13.738467   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738475   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738480   31648 command_runner.go:130] >     },
	I1003 18:08:13.738484   31648 command_runner.go:130] >     {
	I1003 18:08:13.738493   31648 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1003 18:08:13.738502   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738514   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1003 18:08:13.738522   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738531   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738545   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1003 18:08:13.738560   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1003 18:08:13.738568   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738572   31648 command_runner.go:130] >       "size":  "73138073",
	I1003 18:08:13.738580   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738586   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738595   31648 command_runner.go:130] >     },
	I1003 18:08:13.738605   31648 command_runner.go:130] >     {
	I1003 18:08:13.738617   31648 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1003 18:08:13.738625   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738634   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1003 18:08:13.738642   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738648   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738658   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1003 18:08:13.738674   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1003 18:08:13.738683   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738693   31648 command_runner.go:130] >       "size":  "53844823",
	I1003 18:08:13.738702   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738710   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.738718   31648 command_runner.go:130] >       },
	I1003 18:08:13.738724   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738733   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738743   31648 command_runner.go:130] >     },
	I1003 18:08:13.738747   31648 command_runner.go:130] >     {
	I1003 18:08:13.738756   31648 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1003 18:08:13.738766   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738777   31648 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1003 18:08:13.738785   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738792   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738806   31648 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1003 18:08:13.738819   31648 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1003 18:08:13.738827   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738832   31648 command_runner.go:130] >       "size":  "742092",
	I1003 18:08:13.738838   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738843   31648 command_runner.go:130] >         "value":  "65535"
	I1003 18:08:13.738851   31648 command_runner.go:130] >       },
	I1003 18:08:13.738862   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738871   31648 command_runner.go:130] >       "pinned":  true
	I1003 18:08:13.738885   31648 command_runner.go:130] >     }
	I1003 18:08:13.738890   31648 command_runner.go:130] >   ]
	I1003 18:08:13.738898   31648 command_runner.go:130] > }
	I1003 18:08:13.739109   31648 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:08:13.739126   31648 crio.go:433] Images already preloaded, skipping extraction
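The preload check above parses `sudo crictl images --output json` and concludes every required image is present. A sketch of that comparison against a couple of the tags listed in the JSON (requires root for crictl; the required set here is abbreviated for illustration):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type image struct {
		RepoTags []string `json:"repoTags"`
	}

	type imageList struct {
		Images []image `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			log.Fatal(err)
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, want := range []string{
			"registry.k8s.io/kube-apiserver:v1.34.1",
			"registry.k8s.io/pause:3.10.1",
		} {
			fmt.Printf("%s preloaded: %v\n", want, have[want])
		}
	}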
	I1003 18:08:13.739173   31648 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:08:13.761526   31648 command_runner.go:130] > {
	I1003 18:08:13.761550   31648 command_runner.go:130] >   "images":  [
	I1003 18:08:13.761558   31648 command_runner.go:130] >     {
	I1003 18:08:13.761569   31648 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1003 18:08:13.761577   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.761586   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1003 18:08:13.761592   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761599   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.761616   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1003 18:08:13.761631   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1003 18:08:13.761639   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761646   31648 command_runner.go:130] >       "size":  "109379124",
	I1003 18:08:13.761659   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.761672   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.761681   31648 command_runner.go:130] >     },
	I1003 18:08:13.761686   31648 command_runner.go:130] >     {
	I1003 18:08:13.761698   31648 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1003 18:08:13.761708   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.761719   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1003 18:08:13.761728   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761737   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.761753   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1003 18:08:13.761770   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1003 18:08:13.761779   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761789   31648 command_runner.go:130] >       "size":  "31470524",
	I1003 18:08:13.761799   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.761810   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.761818   31648 command_runner.go:130] >     },
	I1003 18:08:13.761823   31648 command_runner.go:130] >     {
	I1003 18:08:13.761836   31648 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1003 18:08:13.761845   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.761852   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1003 18:08:13.761860   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761866   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.761879   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1003 18:08:13.761889   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1003 18:08:13.761897   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761903   31648 command_runner.go:130] >       "size":  "76103547",
	I1003 18:08:13.761913   31648 command_runner.go:130] >       "username":  "nonroot",
	I1003 18:08:13.761922   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.761934   31648 command_runner.go:130] >     },
	I1003 18:08:13.761942   31648 command_runner.go:130] >     {
	I1003 18:08:13.761952   31648 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1003 18:08:13.761960   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.761970   31648 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1003 18:08:13.762000   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762008   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762019   31648 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1003 18:08:13.762032   31648 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1003 18:08:13.762041   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762051   31648 command_runner.go:130] >       "size":  "195976448",
	I1003 18:08:13.762060   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762068   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.762074   31648 command_runner.go:130] >       },
	I1003 18:08:13.762087   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762097   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762101   31648 command_runner.go:130] >     },
	I1003 18:08:13.762109   31648 command_runner.go:130] >     {
	I1003 18:08:13.762117   31648 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1003 18:08:13.762126   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762135   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1003 18:08:13.762143   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762149   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762163   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1003 18:08:13.762178   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1003 18:08:13.762186   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762193   31648 command_runner.go:130] >       "size":  "89046001",
	I1003 18:08:13.762202   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762212   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.762221   31648 command_runner.go:130] >       },
	I1003 18:08:13.762229   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762239   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762248   31648 command_runner.go:130] >     },
	I1003 18:08:13.762256   31648 command_runner.go:130] >     {
	I1003 18:08:13.762265   31648 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1003 18:08:13.762275   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762284   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1003 18:08:13.762292   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762303   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762319   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1003 18:08:13.762335   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1003 18:08:13.762343   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762353   31648 command_runner.go:130] >       "size":  "76004181",
	I1003 18:08:13.762361   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762367   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.762374   31648 command_runner.go:130] >       },
	I1003 18:08:13.762380   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762388   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762392   31648 command_runner.go:130] >     },
	I1003 18:08:13.762401   31648 command_runner.go:130] >     {
	I1003 18:08:13.762412   31648 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1003 18:08:13.762422   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762431   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1003 18:08:13.762438   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762444   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762456   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1003 18:08:13.762468   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1003 18:08:13.762477   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762487   31648 command_runner.go:130] >       "size":  "73138073",
	I1003 18:08:13.762497   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762506   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762515   31648 command_runner.go:130] >     },
	I1003 18:08:13.762523   31648 command_runner.go:130] >     {
	I1003 18:08:13.762533   31648 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1003 18:08:13.762539   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762547   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1003 18:08:13.762552   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762559   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762570   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1003 18:08:13.762593   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1003 18:08:13.762602   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762608   31648 command_runner.go:130] >       "size":  "53844823",
	I1003 18:08:13.762616   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762623   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.762630   31648 command_runner.go:130] >       },
	I1003 18:08:13.762636   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762645   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762653   31648 command_runner.go:130] >     },
	I1003 18:08:13.762657   31648 command_runner.go:130] >     {
	I1003 18:08:13.762665   31648 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1003 18:08:13.762671   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762681   31648 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1003 18:08:13.762686   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762695   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762706   31648 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1003 18:08:13.762720   31648 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1003 18:08:13.762728   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762732   31648 command_runner.go:130] >       "size":  "742092",
	I1003 18:08:13.762737   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762742   31648 command_runner.go:130] >         "value":  "65535"
	I1003 18:08:13.762747   31648 command_runner.go:130] >       },
	I1003 18:08:13.762751   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762757   31648 command_runner.go:130] >       "pinned":  true
	I1003 18:08:13.762761   31648 command_runner.go:130] >     }
	I1003 18:08:13.762766   31648 command_runner.go:130] >   ]
	I1003 18:08:13.762769   31648 command_runner.go:130] > }
	I1003 18:08:13.763568   31648 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:08:13.763587   31648 cache_images.go:85] Images are preloaded, skipping loading
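	For hand-checking a node, the same preload test can be reproduced outside minikube. Below is a minimal Go sketch (hypothetical types and tag list, not minikube's actual code) that decodes the "crictl images --output json" payload in the schema dumped above and reports whether given tags are present:

	// Sketch only: decodes `sudo crictl images --output json` using the
	// field names visible in the log above; struct names are illustrative.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type crictlImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	}

	type imageList struct {
		Images []crictlImage `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		// Tags copied from the listing above; adjust per Kubernetes version.
		want := map[string]bool{
			"registry.k8s.io/kube-apiserver:v1.34.1": false,
			"registry.k8s.io/pause:3.10.1":           false,
		}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if _, ok := want[tag]; ok {
					want[tag] = true
				}
			}
		}
		for tag, ok := range want {
			fmt.Printf("%s preloaded=%v\n", tag, ok)
		}
	}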
	I1003 18:08:13.763596   31648 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1003 18:08:13.763703   31648 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-889240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
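	The kubelet drop-in above is assembled from the cluster config line that follows it. A minimal sketch of that assembly (illustrative Go; parameter names are assumptions, not minikube's internals) reproducing the unit from the version, node name, and node IP seen in this run:

	// Sketch only: renders a kubelet systemd drop-in shaped like the one
	// logged above; parameter names and values are illustrative.
	package main

	import "fmt"

	func kubeletUnit(version, nodeName, nodeIP string) string {
		return fmt.Sprintf("[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\n"+
			"ExecStart=/var/lib/minikube/binaries/%s/kubelet "+
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
			"--config=/var/lib/kubelet/config.yaml "+
			"--hostname-override=%s "+
			"--kubeconfig=/etc/kubernetes/kubelet.conf "+
			"--node-ip=%s\n\n[Install]\n", version, nodeName, nodeIP)
	}

	func main() {
		fmt.Print(kubeletUnit("v1.34.1", "functional-889240", "192.168.49.2"))
	}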
	I1003 18:08:13.763779   31648 ssh_runner.go:195] Run: crio config
	I1003 18:08:13.802487   31648 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1003 18:08:13.802512   31648 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1003 18:08:13.802523   31648 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1003 18:08:13.802528   31648 command_runner.go:130] > #
	I1003 18:08:13.802538   31648 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1003 18:08:13.802546   31648 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1003 18:08:13.802555   31648 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1003 18:08:13.802566   31648 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1003 18:08:13.802572   31648 command_runner.go:130] > # reload'.
	I1003 18:08:13.802583   31648 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1003 18:08:13.802595   31648 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1003 18:08:13.802606   31648 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1003 18:08:13.802615   31648 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1003 18:08:13.802622   31648 command_runner.go:130] > [crio]
	I1003 18:08:13.802632   31648 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1003 18:08:13.802640   31648 command_runner.go:130] > # container images, in this directory.
	I1003 18:08:13.802653   31648 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1003 18:08:13.802671   31648 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1003 18:08:13.802680   31648 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1003 18:08:13.802693   31648 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores its images in this directory rather than under Root.
	I1003 18:08:13.802704   31648 command_runner.go:130] > # imagestore = ""
	I1003 18:08:13.802714   31648 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1003 18:08:13.802726   31648 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1003 18:08:13.802736   31648 command_runner.go:130] > # storage_driver = "overlay"
	I1003 18:08:13.802747   31648 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1003 18:08:13.802761   31648 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1003 18:08:13.802770   31648 command_runner.go:130] > # storage_option = [
	I1003 18:08:13.802777   31648 command_runner.go:130] > # ]
	I1003 18:08:13.802788   31648 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1003 18:08:13.802800   31648 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1003 18:08:13.802808   31648 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1003 18:08:13.802820   31648 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1003 18:08:13.802830   31648 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1003 18:08:13.802835   31648 command_runner.go:130] > # always happen on a node reboot
	I1003 18:08:13.802840   31648 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1003 18:08:13.802849   31648 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1003 18:08:13.802860   31648 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1003 18:08:13.802865   31648 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1003 18:08:13.802871   31648 command_runner.go:130] > # version_file_persist = ""
	I1003 18:08:13.802882   31648 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1003 18:08:13.802899   31648 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1003 18:08:13.802906   31648 command_runner.go:130] > # internal_wipe = true
	I1003 18:08:13.802917   31648 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1003 18:08:13.802929   31648 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1003 18:08:13.802935   31648 command_runner.go:130] > # internal_repair = true
	I1003 18:08:13.802943   31648 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1003 18:08:13.802953   31648 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1003 18:08:13.802966   31648 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1003 18:08:13.802985   31648 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1003 18:08:13.802996   31648 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1003 18:08:13.803006   31648 command_runner.go:130] > [crio.api]
	I1003 18:08:13.803015   31648 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1003 18:08:13.803025   31648 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1003 18:08:13.803033   31648 command_runner.go:130] > # IP address on which the stream server will listen.
	I1003 18:08:13.803043   31648 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1003 18:08:13.803054   31648 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1003 18:08:13.803065   31648 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1003 18:08:13.803072   31648 command_runner.go:130] > # stream_port = "0"
	I1003 18:08:13.803083   31648 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1003 18:08:13.803090   31648 command_runner.go:130] > # stream_enable_tls = false
	I1003 18:08:13.803102   31648 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1003 18:08:13.803114   31648 command_runner.go:130] > # stream_idle_timeout = ""
	I1003 18:08:13.803124   31648 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1003 18:08:13.803136   31648 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1003 18:08:13.803146   31648 command_runner.go:130] > # stream_tls_cert = ""
	I1003 18:08:13.803156   31648 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1003 18:08:13.803166   31648 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1003 18:08:13.803175   31648 command_runner.go:130] > # stream_tls_key = ""
	I1003 18:08:13.803185   31648 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1003 18:08:13.803197   31648 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1003 18:08:13.803202   31648 command_runner.go:130] > # automatically pick up the changes.
	I1003 18:08:13.803207   31648 command_runner.go:130] > # stream_tls_ca = ""
	I1003 18:08:13.803271   31648 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1003 18:08:13.803286   31648 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1003 18:08:13.803296   31648 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1003 18:08:13.803308   31648 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1003 18:08:13.803318   31648 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1003 18:08:13.803331   31648 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1003 18:08:13.803338   31648 command_runner.go:130] > [crio.runtime]
	I1003 18:08:13.803350   31648 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1003 18:08:13.803358   31648 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1003 18:08:13.803367   31648 command_runner.go:130] > # "nofile=1024:2048"
	I1003 18:08:13.803378   31648 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1003 18:08:13.803388   31648 command_runner.go:130] > # default_ulimits = [
	I1003 18:08:13.803393   31648 command_runner.go:130] > # ]
	I1003 18:08:13.803403   31648 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1003 18:08:13.803409   31648 command_runner.go:130] > # no_pivot = false
	I1003 18:08:13.803422   31648 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1003 18:08:13.803432   31648 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1003 18:08:13.803444   31648 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1003 18:08:13.803455   31648 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1003 18:08:13.803462   31648 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1003 18:08:13.803473   31648 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1003 18:08:13.803482   31648 command_runner.go:130] > # conmon = ""
	I1003 18:08:13.803489   31648 command_runner.go:130] > # Cgroup setting for conmon
	I1003 18:08:13.803504   31648 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1003 18:08:13.803513   31648 command_runner.go:130] > conmon_cgroup = "pod"
	I1003 18:08:13.803523   31648 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1003 18:08:13.803534   31648 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1003 18:08:13.803545   31648 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1003 18:08:13.803554   31648 command_runner.go:130] > # conmon_env = [
	I1003 18:08:13.803560   31648 command_runner.go:130] > # ]
	I1003 18:08:13.803573   31648 command_runner.go:130] > # Additional environment variables to set for all the
	I1003 18:08:13.803583   31648 command_runner.go:130] > # containers. These are overridden if set in the
	I1003 18:08:13.803595   31648 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1003 18:08:13.803603   31648 command_runner.go:130] > # default_env = [
	I1003 18:08:13.803611   31648 command_runner.go:130] > # ]
	I1003 18:08:13.803620   31648 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1003 18:08:13.803635   31648 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1003 18:08:13.803644   31648 command_runner.go:130] > # selinux = false
	I1003 18:08:13.803657   31648 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1003 18:08:13.803681   31648 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1003 18:08:13.803693   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.803703   31648 command_runner.go:130] > # seccomp_profile = ""
	I1003 18:08:13.803714   31648 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1003 18:08:13.803725   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.803735   31648 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1003 18:08:13.803746   31648 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1003 18:08:13.803760   31648 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1003 18:08:13.803772   31648 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1003 18:08:13.803785   31648 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1003 18:08:13.803796   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.803803   31648 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1003 18:08:13.803817   31648 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1003 18:08:13.803827   31648 command_runner.go:130] > # the cgroup blockio controller.
	I1003 18:08:13.803833   31648 command_runner.go:130] > # blockio_config_file = ""
	I1003 18:08:13.803847   31648 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1003 18:08:13.803856   31648 command_runner.go:130] > # blockio parameters.
	I1003 18:08:13.803862   31648 command_runner.go:130] > # blockio_reload = false
	I1003 18:08:13.803869   31648 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1003 18:08:13.803877   31648 command_runner.go:130] > # irqbalance daemon.
	I1003 18:08:13.803883   31648 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1003 18:08:13.803890   31648 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask that CRI-O should
	I1003 18:08:13.803906   31648 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1003 18:08:13.803916   31648 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1003 18:08:13.803925   31648 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1003 18:08:13.803933   31648 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1003 18:08:13.803939   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.803951   31648 command_runner.go:130] > # rdt_config_file = ""
	I1003 18:08:13.803958   31648 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1003 18:08:13.803970   31648 command_runner.go:130] > # cgroup_manager = "systemd"
	I1003 18:08:13.803987   31648 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1003 18:08:13.803998   31648 command_runner.go:130] > # separate_pull_cgroup = ""
	I1003 18:08:13.804008   31648 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1003 18:08:13.804017   31648 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1003 18:08:13.804026   31648 command_runner.go:130] > # will be added.
	I1003 18:08:13.804035   31648 command_runner.go:130] > # default_capabilities = [
	I1003 18:08:13.804043   31648 command_runner.go:130] > # 	"CHOWN",
	I1003 18:08:13.804050   31648 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1003 18:08:13.804055   31648 command_runner.go:130] > # 	"FSETID",
	I1003 18:08:13.804066   31648 command_runner.go:130] > # 	"FOWNER",
	I1003 18:08:13.804071   31648 command_runner.go:130] > # 	"SETGID",
	I1003 18:08:13.804087   31648 command_runner.go:130] > # 	"SETUID",
	I1003 18:08:13.804093   31648 command_runner.go:130] > # 	"SETPCAP",
	I1003 18:08:13.804097   31648 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1003 18:08:13.804102   31648 command_runner.go:130] > # 	"KILL",
	I1003 18:08:13.804105   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804112   31648 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1003 18:08:13.804121   31648 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1003 18:08:13.804125   31648 command_runner.go:130] > # add_inheritable_capabilities = false
	I1003 18:08:13.804133   31648 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1003 18:08:13.804138   31648 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1003 18:08:13.804143   31648 command_runner.go:130] > default_sysctls = [
	I1003 18:08:13.804147   31648 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1003 18:08:13.804150   31648 command_runner.go:130] > ]
	I1003 18:08:13.804157   31648 command_runner.go:130] > # List of devices on the host that a
	I1003 18:08:13.804163   31648 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1003 18:08:13.804169   31648 command_runner.go:130] > # allowed_devices = [
	I1003 18:08:13.804173   31648 command_runner.go:130] > # 	"/dev/fuse",
	I1003 18:08:13.804178   31648 command_runner.go:130] > # 	"/dev/net/tun",
	I1003 18:08:13.804181   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804188   31648 command_runner.go:130] > # List of additional devices, specified as
	I1003 18:08:13.804194   31648 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1003 18:08:13.804201   31648 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1003 18:08:13.804207   31648 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1003 18:08:13.804212   31648 command_runner.go:130] > # additional_devices = [
	I1003 18:08:13.804215   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804222   31648 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1003 18:08:13.804226   31648 command_runner.go:130] > # cdi_spec_dirs = [
	I1003 18:08:13.804231   31648 command_runner.go:130] > # 	"/etc/cdi",
	I1003 18:08:13.804235   31648 command_runner.go:130] > # 	"/var/run/cdi",
	I1003 18:08:13.804237   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804243   31648 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1003 18:08:13.804251   31648 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1003 18:08:13.804254   31648 command_runner.go:130] > # Defaults to false.
	I1003 18:08:13.804261   31648 command_runner.go:130] > # device_ownership_from_security_context = false
	I1003 18:08:13.804268   31648 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1003 18:08:13.804275   31648 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1003 18:08:13.804279   31648 command_runner.go:130] > # hooks_dir = [
	I1003 18:08:13.804286   31648 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1003 18:08:13.804290   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804297   31648 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1003 18:08:13.804303   31648 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1003 18:08:13.804309   31648 command_runner.go:130] > # its default mounts from the following two files:
	I1003 18:08:13.804312   31648 command_runner.go:130] > #
	I1003 18:08:13.804320   31648 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1003 18:08:13.804326   31648 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1003 18:08:13.804333   31648 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1003 18:08:13.804336   31648 command_runner.go:130] > #
	I1003 18:08:13.804342   31648 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1003 18:08:13.804349   31648 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1003 18:08:13.804356   31648 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1003 18:08:13.804363   31648 command_runner.go:130] > #      only add mounts it finds in this file.
	I1003 18:08:13.804366   31648 command_runner.go:130] > #
	I1003 18:08:13.804372   31648 command_runner.go:130] > # default_mounts_file = ""
	I1003 18:08:13.804376   31648 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1003 18:08:13.804384   31648 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1003 18:08:13.804388   31648 command_runner.go:130] > # pids_limit = -1
	I1003 18:08:13.804396   31648 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1003 18:08:13.804401   31648 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1003 18:08:13.804409   31648 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1003 18:08:13.804417   31648 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1003 18:08:13.804422   31648 command_runner.go:130] > # log_size_max = -1
	I1003 18:08:13.804429   31648 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1003 18:08:13.804435   31648 command_runner.go:130] > # log_to_journald = false
	I1003 18:08:13.804441   31648 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1003 18:08:13.804447   31648 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1003 18:08:13.804451   31648 command_runner.go:130] > # Path to directory for container attach sockets.
	I1003 18:08:13.804458   31648 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1003 18:08:13.804463   31648 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1003 18:08:13.804469   31648 command_runner.go:130] > # bind_mount_prefix = ""
	I1003 18:08:13.804473   31648 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1003 18:08:13.804479   31648 command_runner.go:130] > # read_only = false
	I1003 18:08:13.804486   31648 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1003 18:08:13.804494   31648 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1003 18:08:13.804497   31648 command_runner.go:130] > # live configuration reload.
	I1003 18:08:13.804501   31648 command_runner.go:130] > # log_level = "info"
	I1003 18:08:13.804508   31648 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1003 18:08:13.804513   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.804519   31648 command_runner.go:130] > # log_filter = ""
	I1003 18:08:13.804524   31648 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1003 18:08:13.804532   31648 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1003 18:08:13.804535   31648 command_runner.go:130] > # separated by comma.
	I1003 18:08:13.804544   31648 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1003 18:08:13.804551   31648 command_runner.go:130] > # uid_mappings = ""
	I1003 18:08:13.804557   31648 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1003 18:08:13.804564   31648 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1003 18:08:13.804569   31648 command_runner.go:130] > # separated by comma.
	I1003 18:08:13.804578   31648 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1003 18:08:13.804582   31648 command_runner.go:130] > # gid_mappings = ""
	I1003 18:08:13.804589   31648 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1003 18:08:13.804595   31648 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1003 18:08:13.804603   31648 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1003 18:08:13.804612   31648 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1003 18:08:13.804618   31648 command_runner.go:130] > # minimum_mappable_uid = -1
	I1003 18:08:13.804624   31648 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1003 18:08:13.804631   31648 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1003 18:08:13.804636   31648 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1003 18:08:13.804645   31648 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1003 18:08:13.804651   31648 command_runner.go:130] > # minimum_mappable_gid = -1
	I1003 18:08:13.804657   31648 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1003 18:08:13.804669   31648 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1003 18:08:13.804674   31648 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1003 18:08:13.804680   31648 command_runner.go:130] > # ctr_stop_timeout = 30
	I1003 18:08:13.804685   31648 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1003 18:08:13.804693   31648 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1003 18:08:13.804697   31648 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1003 18:08:13.804703   31648 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1003 18:08:13.804707   31648 command_runner.go:130] > # drop_infra_ctr = true
	I1003 18:08:13.804715   31648 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1003 18:08:13.804720   31648 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1003 18:08:13.804728   31648 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1003 18:08:13.804735   31648 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1003 18:08:13.804742   31648 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1003 18:08:13.804749   31648 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1003 18:08:13.804754   31648 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1003 18:08:13.804761   31648 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1003 18:08:13.804765   31648 command_runner.go:130] > # shared_cpuset = ""
	I1003 18:08:13.804773   31648 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1003 18:08:13.804777   31648 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1003 18:08:13.804783   31648 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1003 18:08:13.804789   31648 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1003 18:08:13.804795   31648 command_runner.go:130] > # pinns_path = ""
	I1003 18:08:13.804800   31648 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1003 18:08:13.804808   31648 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1003 18:08:13.804813   31648 command_runner.go:130] > # enable_criu_support = true
	I1003 18:08:13.804819   31648 command_runner.go:130] > # Enable/disable the generation of the container,
	I1003 18:08:13.804825   31648 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1003 18:08:13.804832   31648 command_runner.go:130] > # enable_pod_events = false
	I1003 18:08:13.804837   31648 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1003 18:08:13.804844   31648 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1003 18:08:13.804848   31648 command_runner.go:130] > # default_runtime = "crun"
	I1003 18:08:13.804855   31648 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1003 18:08:13.804862   31648 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1003 18:08:13.804874   31648 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1003 18:08:13.804881   31648 command_runner.go:130] > # creation as a file is not desired either.
	I1003 18:08:13.804889   31648 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1003 18:08:13.804896   31648 command_runner.go:130] > # the hostname is being managed dynamically.
	I1003 18:08:13.804900   31648 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1003 18:08:13.804905   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804912   31648 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1003 18:08:13.804920   31648 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1003 18:08:13.804926   31648 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1003 18:08:13.804931   31648 command_runner.go:130] > # Each entry in the table should follow the format:
	I1003 18:08:13.804934   31648 command_runner.go:130] > #
	I1003 18:08:13.804941   31648 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1003 18:08:13.804945   31648 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1003 18:08:13.804952   31648 command_runner.go:130] > # runtime_type = "oci"
	I1003 18:08:13.804956   31648 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1003 18:08:13.804963   31648 command_runner.go:130] > # inherit_default_runtime = false
	I1003 18:08:13.804968   31648 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1003 18:08:13.804988   31648 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1003 18:08:13.804996   31648 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1003 18:08:13.805005   31648 command_runner.go:130] > # monitor_env = []
	I1003 18:08:13.805011   31648 command_runner.go:130] > # privileged_without_host_devices = false
	I1003 18:08:13.805017   31648 command_runner.go:130] > # allowed_annotations = []
	I1003 18:08:13.805022   31648 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1003 18:08:13.805028   31648 command_runner.go:130] > # no_sync_log = false
	I1003 18:08:13.805032   31648 command_runner.go:130] > # default_annotations = {}
	I1003 18:08:13.805038   31648 command_runner.go:130] > # stream_websockets = false
	I1003 18:08:13.805042   31648 command_runner.go:130] > # seccomp_profile = ""
	I1003 18:08:13.805062   31648 command_runner.go:130] > # Where:
	I1003 18:08:13.805069   31648 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1003 18:08:13.805075   31648 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1003 18:08:13.805081   31648 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1003 18:08:13.805089   31648 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1003 18:08:13.805092   31648 command_runner.go:130] > #   in $PATH.
	I1003 18:08:13.805100   31648 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1003 18:08:13.805105   31648 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1003 18:08:13.805112   31648 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1003 18:08:13.805115   31648 command_runner.go:130] > #   state.
	I1003 18:08:13.805121   31648 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1003 18:08:13.805128   31648 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1003 18:08:13.805133   31648 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1003 18:08:13.805141   31648 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1003 18:08:13.805146   31648 command_runner.go:130] > #   the values from the default runtime on load time.
	I1003 18:08:13.805153   31648 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1003 18:08:13.805158   31648 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1003 18:08:13.805165   31648 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1003 18:08:13.805177   31648 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1003 18:08:13.805183   31648 command_runner.go:130] > #   The currently recognized values are:
	I1003 18:08:13.805190   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1003 18:08:13.805199   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1003 18:08:13.805207   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1003 18:08:13.805214   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1003 18:08:13.805221   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1003 18:08:13.805229   31648 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1003 18:08:13.805235   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1003 18:08:13.805243   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1003 18:08:13.805251   31648 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1003 18:08:13.805257   31648 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1003 18:08:13.805265   31648 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1003 18:08:13.805273   31648 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1003 18:08:13.805278   31648 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1003 18:08:13.805285   31648 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1003 18:08:13.805291   31648 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1003 18:08:13.805300   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1003 18:08:13.805308   31648 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1003 18:08:13.805312   31648 command_runner.go:130] > #   deprecated option "conmon".
	I1003 18:08:13.805319   31648 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1003 18:08:13.805326   31648 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1003 18:08:13.805332   31648 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1003 18:08:13.805339   31648 command_runner.go:130] > #   should be moved to the container's cgroup
	I1003 18:08:13.805346   31648 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1003 18:08:13.805352   31648 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1003 18:08:13.805358   31648 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1003 18:08:13.805364   31648 command_runner.go:130] > #   conmon-rs by using:
	I1003 18:08:13.805370   31648 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1003 18:08:13.805379   31648 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1003 18:08:13.805388   31648 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1003 18:08:13.805395   31648 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1003 18:08:13.805401   31648 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1003 18:08:13.805415   31648 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1003 18:08:13.805423   31648 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1003 18:08:13.805430   31648 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1003 18:08:13.805437   31648 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1003 18:08:13.805449   31648 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1003 18:08:13.805455   31648 command_runner.go:130] > #   when a machine crash happens.
	I1003 18:08:13.805462   31648 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1003 18:08:13.805471   31648 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1003 18:08:13.805480   31648 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1003 18:08:13.805485   31648 command_runner.go:130] > #   seccomp profile for the runtime.
	I1003 18:08:13.805491   31648 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1003 18:08:13.805499   31648 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1003 18:08:13.805504   31648 command_runner.go:130] > #
	I1003 18:08:13.805508   31648 command_runner.go:130] > # Using the seccomp notifier feature:
	I1003 18:08:13.805513   31648 command_runner.go:130] > #
	I1003 18:08:13.805518   31648 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1003 18:08:13.805528   31648 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1003 18:08:13.805533   31648 command_runner.go:130] > #
	I1003 18:08:13.805539   31648 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1003 18:08:13.805547   31648 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1003 18:08:13.805549   31648 command_runner.go:130] > #
	I1003 18:08:13.805555   31648 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1003 18:08:13.805560   31648 command_runner.go:130] > # feature.
	I1003 18:08:13.805563   31648 command_runner.go:130] > #
	I1003 18:08:13.805568   31648 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1003 18:08:13.805576   31648 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1003 18:08:13.805582   31648 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1003 18:08:13.805589   31648 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1003 18:08:13.805595   31648 command_runner.go:130] > # seconds when the annotation is set to "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1003 18:08:13.805600   31648 command_runner.go:130] > #
	I1003 18:08:13.805605   31648 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1003 18:08:13.805614   31648 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1003 18:08:13.805619   31648 command_runner.go:130] > #
	I1003 18:08:13.805625   31648 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1003 18:08:13.805632   31648 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1003 18:08:13.805635   31648 command_runner.go:130] > #
	I1003 18:08:13.805641   31648 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1003 18:08:13.805649   31648 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1003 18:08:13.805652   31648 command_runner.go:130] > # limitation.
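A minimal sketch of the setup described above, assuming the crun handler configured just below is the target; the drop-in filename, the restart step, and the annotation value are illustrative, not taken from this run:

	# allow the notifier annotation for crun via a config drop-in
	sudo tee /etc/crio/crio.conf.d/99-seccomp-notifier.conf >/dev/null <<-'EOF'
	[crio.runtime.runtimes.crun]
	allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
	EOF
	sudo systemctl restart crio
	# then create the pod with restartPolicy: Never and the annotation
	# io.kubernetes.cri-o.seccompNotifierAction: "stop", so CRI-O stops the
	# workload ~5s after the last blocked syscall is observed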
	I1003 18:08:13.805656   31648 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1003 18:08:13.805666   31648 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1003 18:08:13.805671   31648 command_runner.go:130] > runtime_type = ""
	I1003 18:08:13.805675   31648 command_runner.go:130] > runtime_root = "/run/crun"
	I1003 18:08:13.805679   31648 command_runner.go:130] > inherit_default_runtime = false
	I1003 18:08:13.805683   31648 command_runner.go:130] > runtime_config_path = ""
	I1003 18:08:13.805689   31648 command_runner.go:130] > container_min_memory = ""
	I1003 18:08:13.805694   31648 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1003 18:08:13.805700   31648 command_runner.go:130] > monitor_cgroup = "pod"
	I1003 18:08:13.805704   31648 command_runner.go:130] > monitor_exec_cgroup = ""
	I1003 18:08:13.805710   31648 command_runner.go:130] > allowed_annotations = [
	I1003 18:08:13.805714   31648 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1003 18:08:13.805718   31648 command_runner.go:130] > ]
	I1003 18:08:13.805722   31648 command_runner.go:130] > privileged_without_host_devices = false
	I1003 18:08:13.805728   31648 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1003 18:08:13.805733   31648 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1003 18:08:13.805738   31648 command_runner.go:130] > runtime_type = ""
	I1003 18:08:13.805742   31648 command_runner.go:130] > runtime_root = "/run/runc"
	I1003 18:08:13.805748   31648 command_runner.go:130] > inherit_default_runtime = false
	I1003 18:08:13.805751   31648 command_runner.go:130] > runtime_config_path = ""
	I1003 18:08:13.805758   31648 command_runner.go:130] > container_min_memory = ""
	I1003 18:08:13.805762   31648 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1003 18:08:13.805767   31648 command_runner.go:130] > monitor_cgroup = "pod"
	I1003 18:08:13.805771   31648 command_runner.go:130] > monitor_exec_cgroup = ""
	I1003 18:08:13.805778   31648 command_runner.go:130] > privileged_without_host_devices = false
	I1003 18:08:13.805784   31648 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1003 18:08:13.805790   31648 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1003 18:08:13.805796   31648 command_runner.go:130] > # Note that the behavior of this table is EXPERIMENTAL and may change at any time.
	I1003 18:08:13.805805   31648 command_runner.go:130] > # Each workload has a name, an activation_annotation, an annotation_prefix, and a set of resources it supports mutating.
	I1003 18:08:13.805817   31648 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1003 18:08:13.805828   31648 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1003 18:08:13.805837   31648 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1003 18:08:13.805842   31648 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1003 18:08:13.805852   31648 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1003 18:08:13.805860   31648 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified,
	I1003 18:08:13.805867   31648 command_runner.go:130] > # signifying that the default value should be overridden for that resource type.
	I1003 18:08:13.805873   31648 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1003 18:08:13.805878   31648 command_runner.go:130] > # Example:
	I1003 18:08:13.805882   31648 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1003 18:08:13.805886   31648 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1003 18:08:13.805893   31648 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1003 18:08:13.805899   31648 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1003 18:08:13.805903   31648 command_runner.go:130] > # cpuset = "0-1"
	I1003 18:08:13.805906   31648 command_runner.go:130] > # cpushares = "5"
	I1003 18:08:13.805910   31648 command_runner.go:130] > # cpuquota = "1000"
	I1003 18:08:13.805919   31648 command_runner.go:130] > # cpuperiod = "100000"
	I1003 18:08:13.805924   31648 command_runner.go:130] > # cpulimit = "35"
	I1003 18:08:13.805933   31648 command_runner.go:130] > # Where:
	I1003 18:08:13.805940   31648 command_runner.go:130] > # The workload name is workload-type.
	I1003 18:08:13.805950   31648 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1003 18:08:13.805955   31648 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1003 18:08:13.805960   31648 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1003 18:08:13.805971   31648 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1003 18:08:13.805994   31648 command_runner.go:130] > # "io.crio.workload-type.cpushares/$container_name" = "value"
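As a concrete illustration of the annotation scheme above (per the $annotation_prefix.$resource/$ctrName form), a hypothetical pod opting into the example workload could be created like this; the pod/container name "app" and the cpushares value are invented, and the --annotations flag assumes a reasonably recent kubectl:

	kubectl run app --image=busybox --restart=Never \
	  --annotations='io.crio/workload=' \
	  --annotations='io.crio.workload-type.cpushares/app=512' \
	  -- sleep 3600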
	I1003 18:08:13.806006   31648 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1003 18:08:13.806019   31648 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1003 18:08:13.806027   31648 command_runner.go:130] > # Default value is set to true
	I1003 18:08:13.806031   31648 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1003 18:08:13.806036   31648 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1003 18:08:13.806040   31648 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1003 18:08:13.806047   31648 command_runner.go:130] > # Default value is set to 'false'
	I1003 18:08:13.806052   31648 command_runner.go:130] > # disable_hostport_mapping = false
	I1003 18:08:13.806057   31648 command_runner.go:130] > # timezone: the timezone to set for a container in CRI-O.
	I1003 18:08:13.806066   31648 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1003 18:08:13.806074   31648 command_runner.go:130] > # timezone = ""
	I1003 18:08:13.806085   31648 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1003 18:08:13.806093   31648 command_runner.go:130] > #
	I1003 18:08:13.806105   31648 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1003 18:08:13.806116   31648 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1003 18:08:13.806122   31648 command_runner.go:130] > [crio.image]
	I1003 18:08:13.806127   31648 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1003 18:08:13.806134   31648 command_runner.go:130] > # default_transport = "docker://"
	I1003 18:08:13.806139   31648 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1003 18:08:13.806147   31648 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1003 18:08:13.806154   31648 command_runner.go:130] > # global_auth_file = ""
	I1003 18:08:13.806159   31648 command_runner.go:130] > # The image used to instantiate infra containers.
	I1003 18:08:13.806165   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.806170   31648 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1003 18:08:13.806178   31648 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1003 18:08:13.806185   31648 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1003 18:08:13.806190   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.806196   31648 command_runner.go:130] > # pause_image_auth_file = ""
	I1003 18:08:13.806202   31648 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1003 18:08:13.806209   31648 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1003 18:08:13.806215   31648 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1003 18:08:13.806220   31648 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1003 18:08:13.806226   31648 command_runner.go:130] > # pause_command = "/pause"
	I1003 18:08:13.806231   31648 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1003 18:08:13.806239   31648 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1003 18:08:13.806244   31648 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1003 18:08:13.806252   31648 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1003 18:08:13.806257   31648 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1003 18:08:13.806264   31648 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1003 18:08:13.806268   31648 command_runner.go:130] > # pinned_images = [
	I1003 18:08:13.806271   31648 command_runner.go:130] > # ]
	I1003 18:08:13.806278   31648 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1003 18:08:13.806286   31648 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1003 18:08:13.806293   31648 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1003 18:08:13.806301   31648 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1003 18:08:13.806306   31648 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1003 18:08:13.806312   31648 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1003 18:08:13.806318   31648 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1003 18:08:13.806325   31648 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1003 18:08:13.806333   31648 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1003 18:08:13.806341   31648 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1003 18:08:13.806347   31648 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1003 18:08:13.806353   31648 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
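To make the lookup order concrete (namespace "prod" is invented; paths are the defaults shown above):

	# a pull for a pod in namespace "prod" first consults
	#   /etc/crio/policies/prod.json        # <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json
	# and falls back to the signature_policy above ("/etc/crio/policy.json" in this run)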
	I1003 18:08:13.806358   31648 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1003 18:08:13.806366   31648 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1003 18:08:13.806369   31648 command_runner.go:130] > # changing them here.
	I1003 18:08:13.806374   31648 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1003 18:08:13.806380   31648 command_runner.go:130] > # insecure_registries = [
	I1003 18:08:13.806383   31648 command_runner.go:130] > # ]
	I1003 18:08:13.806391   31648 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1003 18:08:13.806398   31648 command_runner.go:130] > # ignore; the last one ignores volumes entirely.
	I1003 18:08:13.806404   31648 command_runner.go:130] > # image_volumes = "mkdir"
	I1003 18:08:13.806409   31648 command_runner.go:130] > # Temporary directory to use for storing big files
	I1003 18:08:13.806415   31648 command_runner.go:130] > # big_files_temporary_dir = ""
	I1003 18:08:13.806420   31648 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1003 18:08:13.806429   31648 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1003 18:08:13.806435   31648 command_runner.go:130] > # auto_reload_registries = false
	I1003 18:08:13.806441   31648 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1003 18:08:13.806450   31648 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval, as pull_progress_timeout / 10.
	I1003 18:08:13.806467   31648 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1003 18:08:13.806473   31648 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1003 18:08:13.806477   31648 command_runner.go:130] > # The mode of short name resolution.
	I1003 18:08:13.806484   31648 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1003 18:08:13.806492   31648 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1003 18:08:13.806499   31648 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1003 18:08:13.806503   31648 command_runner.go:130] > # short_name_mode = "enforcing"
	I1003 18:08:13.806511   31648 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1003 18:08:13.806518   31648 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1003 18:08:13.806523   31648 command_runner.go:130] > # oci_artifact_mount_support = true
	I1003 18:08:13.806530   31648 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1003 18:08:13.806535   31648 command_runner.go:130] > # CNI plugins.
	I1003 18:08:13.806541   31648 command_runner.go:130] > [crio.network]
	I1003 18:08:13.806546   31648 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1003 18:08:13.806553   31648 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1003 18:08:13.806557   31648 command_runner.go:130] > # cni_default_network = ""
	I1003 18:08:13.806562   31648 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1003 18:08:13.806568   31648 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1003 18:08:13.806573   31648 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1003 18:08:13.806580   31648 command_runner.go:130] > # plugin_dirs = [
	I1003 18:08:13.806584   31648 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1003 18:08:13.806589   31648 command_runner.go:130] > # ]
	I1003 18:08:13.806593   31648 command_runner.go:130] > # List of included pod metrics.
	I1003 18:08:13.806599   31648 command_runner.go:130] > # included_pod_metrics = [
	I1003 18:08:13.806603   31648 command_runner.go:130] > # ]
	I1003 18:08:13.806610   31648 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1003 18:08:13.806614   31648 command_runner.go:130] > [crio.metrics]
	I1003 18:08:13.806618   31648 command_runner.go:130] > # Globally enable or disable metrics support.
	I1003 18:08:13.806624   31648 command_runner.go:130] > # enable_metrics = false
	I1003 18:08:13.806629   31648 command_runner.go:130] > # Specify enabled metrics collectors.
	I1003 18:08:13.806635   31648 command_runner.go:130] > # By default, all metrics are enabled.
	I1003 18:08:13.806640   31648 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1003 18:08:13.806647   31648 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1003 18:08:13.806654   31648 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1003 18:08:13.806662   31648 command_runner.go:130] > # metrics_collectors = [
	I1003 18:08:13.806668   31648 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1003 18:08:13.806672   31648 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1003 18:08:13.806676   31648 command_runner.go:130] > # 	"containers_oom_total",
	I1003 18:08:13.806679   31648 command_runner.go:130] > # 	"processes_defunct",
	I1003 18:08:13.806682   31648 command_runner.go:130] > # 	"operations_total",
	I1003 18:08:13.806687   31648 command_runner.go:130] > # 	"operations_latency_seconds",
	I1003 18:08:13.806691   31648 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1003 18:08:13.806694   31648 command_runner.go:130] > # 	"operations_errors_total",
	I1003 18:08:13.806697   31648 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1003 18:08:13.806701   31648 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1003 18:08:13.806705   31648 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1003 18:08:13.806709   31648 command_runner.go:130] > # 	"image_pulls_success_total",
	I1003 18:08:13.806713   31648 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1003 18:08:13.806716   31648 command_runner.go:130] > # 	"containers_oom_count_total",
	I1003 18:08:13.806720   31648 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1003 18:08:13.806724   31648 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1003 18:08:13.806728   31648 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1003 18:08:13.806730   31648 command_runner.go:130] > # ]
	I1003 18:08:13.806736   31648 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1003 18:08:13.806739   31648 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1003 18:08:13.806744   31648 command_runner.go:130] > # The port on which the metrics server will listen.
	I1003 18:08:13.806747   31648 command_runner.go:130] > # metrics_port = 9090
	I1003 18:08:13.806751   31648 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1003 18:08:13.806755   31648 command_runner.go:130] > # metrics_socket = ""
	I1003 18:08:13.806759   31648 command_runner.go:130] > # The certificate for the secure metrics server.
	I1003 18:08:13.806765   31648 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1003 18:08:13.806770   31648 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1003 18:08:13.806774   31648 command_runner.go:130] > # certificate on any modification event.
	I1003 18:08:13.806780   31648 command_runner.go:130] > # metrics_cert = ""
	I1003 18:08:13.806785   31648 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1003 18:08:13.806791   31648 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1003 18:08:13.806795   31648 command_runner.go:130] > # metrics_key = ""
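If metrics were switched on (enable_metrics is false by default, as shown above), the endpoint could be probed with the default host/port from this config; a sketch:

	curl -s http://127.0.0.1:9090/metrics | grep '_crio_' | head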
	I1003 18:08:13.806802   31648 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1003 18:08:13.806805   31648 command_runner.go:130] > [crio.tracing]
	I1003 18:08:13.806810   31648 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1003 18:08:13.806816   31648 command_runner.go:130] > # enable_tracing = false
	I1003 18:08:13.806821   31648 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1003 18:08:13.806827   31648 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1003 18:08:13.806834   31648 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1003 18:08:13.806841   31648 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1003 18:08:13.806845   31648 command_runner.go:130] > # CRI-O NRI configuration.
	I1003 18:08:13.806850   31648 command_runner.go:130] > [crio.nri]
	I1003 18:08:13.806854   31648 command_runner.go:130] > # Globally enable or disable NRI.
	I1003 18:08:13.806860   31648 command_runner.go:130] > # enable_nri = true
	I1003 18:08:13.806864   31648 command_runner.go:130] > # NRI socket to listen on.
	I1003 18:08:13.806870   31648 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1003 18:08:13.806874   31648 command_runner.go:130] > # NRI plugin directory to use.
	I1003 18:08:13.806880   31648 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1003 18:08:13.806885   31648 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1003 18:08:13.806891   31648 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1003 18:08:13.806896   31648 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1003 18:08:13.806926   31648 command_runner.go:130] > # nri_disable_connections = false
	I1003 18:08:13.806934   31648 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1003 18:08:13.806938   31648 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1003 18:08:13.806944   31648 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1003 18:08:13.806948   31648 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1003 18:08:13.806955   31648 command_runner.go:130] > # NRI default validator configuration.
	I1003 18:08:13.806961   31648 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1003 18:08:13.806968   31648 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1003 18:08:13.806972   31648 command_runner.go:130] > # can be restricted/rejected:
	I1003 18:08:13.806990   31648 command_runner.go:130] > # - OCI hook injection
	I1003 18:08:13.806998   31648 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1003 18:08:13.807007   31648 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1003 18:08:13.807014   31648 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1003 18:08:13.807024   31648 command_runner.go:130] > # - adjustment of linux namespaces
	I1003 18:08:13.807033   31648 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1003 18:08:13.807041   31648 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1003 18:08:13.807046   31648 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1003 18:08:13.807051   31648 command_runner.go:130] > #
	I1003 18:08:13.807055   31648 command_runner.go:130] > # [crio.nri.default_validator]
	I1003 18:08:13.807060   31648 command_runner.go:130] > # nri_enable_default_validator = false
	I1003 18:08:13.807067   31648 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1003 18:08:13.807072   31648 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1003 18:08:13.807079   31648 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1003 18:08:13.807083   31648 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1003 18:08:13.807088   31648 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1003 18:08:13.807094   31648 command_runner.go:130] > # nri_validator_required_plugins = [
	I1003 18:08:13.807097   31648 command_runner.go:130] > # ]
	I1003 18:08:13.807104   31648 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1003 18:08:13.807109   31648 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1003 18:08:13.807115   31648 command_runner.go:130] > [crio.stats]
	I1003 18:08:13.807121   31648 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1003 18:08:13.807128   31648 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1003 18:08:13.807132   31648 command_runner.go:130] > # stats_collection_period = 0
	I1003 18:08:13.807141   31648 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1003 18:08:13.807147   31648 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1003 18:08:13.807154   31648 command_runner.go:130] > # collection_period = 0
	I1003 18:08:13.807173   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.78773481Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1003 18:08:13.807183   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.787758775Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1003 18:08:13.807194   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.787775454Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1003 18:08:13.807203   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.78779273Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1003 18:08:13.807213   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.7878475Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.807222   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.788021357Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1003 18:08:13.807234   31648 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1003 18:08:13.807290   31648 cni.go:84] Creating CNI manager for ""
	I1003 18:08:13.807303   31648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:08:13.807321   31648 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:08:13.807344   31648 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-889240 NodeName:functional-889240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:08:13.807460   31648 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-889240"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 18:08:13.807513   31648 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:08:13.814815   31648 command_runner.go:130] > kubeadm
	I1003 18:08:13.814829   31648 command_runner.go:130] > kubectl
	I1003 18:08:13.814834   31648 command_runner.go:130] > kubelet
	I1003 18:08:13.815427   31648 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:08:13.815489   31648 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:08:13.822648   31648 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1003 18:08:13.834615   31648 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:08:13.846006   31648 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
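The file just written above can be sanity-checked by hand; recent kubeadm releases ship a validate subcommand (illustrative, not part of this run):

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new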
	I1003 18:08:13.857402   31648 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 18:08:13.860916   31648 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
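The grep above is an idempotence check; a sketch of the check-then-append pattern it implements (the entry already exists in this run, so nothing would be written):

	grep -q 'control-plane.minikube.internal' /etc/hosts ||
	  printf '192.168.49.2\tcontrol-plane.minikube.internal\n' | sudo tee -a /etc/hosts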
	I1003 18:08:13.860998   31648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:08:13.942536   31648 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:08:13.955386   31648 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240 for IP: 192.168.49.2
	I1003 18:08:13.955406   31648 certs.go:195] generating shared ca certs ...
	I1003 18:08:13.955424   31648 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:08:13.955571   31648 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:08:13.955642   31648 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:08:13.955660   31648 certs.go:257] generating profile certs ...
	I1003 18:08:13.955770   31648 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.key
	I1003 18:08:13.955933   31648 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key.eb3f8f7c
	I1003 18:08:13.956034   31648 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key
	I1003 18:08:13.956049   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:08:13.956072   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:08:13.956090   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:08:13.956107   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:08:13.956123   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:08:13.956140   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:08:13.956160   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:08:13.956185   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:08:13.956244   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:08:13.956286   31648 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:08:13.956298   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:08:13.956331   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:08:13.956364   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:08:13.956397   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:08:13.956451   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:08:13.956487   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:08:13.956507   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:08:13.956528   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:13.957144   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:08:13.973779   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:08:13.990161   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:08:14.006157   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:08:14.022253   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 18:08:14.038198   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:08:14.054095   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:08:14.069959   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:08:14.085810   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:08:14.101812   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:08:14.117716   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:08:14.134093   31648 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:08:14.145835   31648 ssh_runner.go:195] Run: openssl version
	I1003 18:08:14.151369   31648 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1003 18:08:14.151660   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:08:14.160011   31648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:08:14.163572   31648 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:08:14.163595   31648 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:08:14.163631   31648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:08:14.196823   31648 command_runner.go:130] > 3ec20f2e
	I1003 18:08:14.197073   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:08:14.204835   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:08:14.212908   31648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:14.216400   31648 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:14.216425   31648 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:14.216454   31648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:14.249946   31648 command_runner.go:130] > b5213941
	I1003 18:08:14.250032   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:08:14.257940   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:08:14.266302   31648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:08:14.269939   31648 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:08:14.269964   31648 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:08:14.270013   31648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:08:14.303247   31648 command_runner.go:130] > 51391683
	I1003 18:08:14.303479   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
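The openssl/ln pairs above implement the standard OpenSSL hashed-certificate directory layout; a generic sketch of the same step (certificate path illustrative):

	cert=/usr/share/ca-certificates/example.pem
	hash=$(openssl x509 -hash -noout -in "$cert")
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"   # OpenSSL resolves CAs as <subject-hash>.0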
	I1003 18:08:14.311263   31648 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:08:14.314772   31648 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:08:14.314798   31648 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1003 18:08:14.314807   31648 command_runner.go:130] > Device: 8,1	Inode: 579409      Links: 1
	I1003 18:08:14.314815   31648 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1003 18:08:14.314823   31648 command_runner.go:130] > Access: 2025-10-03 18:04:07.266428775 +0000
	I1003 18:08:14.314828   31648 command_runner.go:130] > Modify: 2025-10-03 18:00:02.305264452 +0000
	I1003 18:08:14.314842   31648 command_runner.go:130] > Change: 2025-10-03 18:00:02.305264452 +0000
	I1003 18:08:14.314851   31648 command_runner.go:130] >  Birth: 2025-10-03 18:00:02.305264452 +0000
	I1003 18:08:14.314920   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 18:08:14.349195   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.349493   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 18:08:14.382820   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.383063   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 18:08:14.416849   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.416933   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 18:08:14.450508   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.450572   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 18:08:14.483927   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.484012   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1003 18:08:14.517658   31648 command_runner.go:130] > Certificate will not expire
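Each "Certificate will not expire" line above is openssl's success output for -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours; for example:

	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt \
	  && echo 'valid for >24h' || echo 'expires within 24h'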
	I1003 18:08:14.518008   31648 kubeadm.go:400] StartCluster: {Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:08:14.518097   31648 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:08:14.518174   31648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:08:14.544326   31648 cri.go:89] found id: ""
	I1003 18:08:14.544381   31648 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:08:14.551440   31648 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1003 18:08:14.551457   31648 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1003 18:08:14.551463   31648 command_runner.go:130] > /var/lib/minikube/etcd:
	I1003 18:08:14.551962   31648 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 18:08:14.551995   31648 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 18:08:14.552044   31648 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 18:08:14.559024   31648 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:08:14.559104   31648 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-889240" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:14.559135   31648 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-8669/kubeconfig needs updating (will repair): [kubeconfig missing "functional-889240" cluster setting kubeconfig missing "functional-889240" context setting]
	I1003 18:08:14.559426   31648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:08:14.562686   31648 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:14.562840   31648 kapi.go:59] client config for functional-889240: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:08:14.563280   31648 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 18:08:14.563295   31648 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1003 18:08:14.563300   31648 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1003 18:08:14.563305   31648 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 18:08:14.563310   31648 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1003 18:08:14.563344   31648 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1003 18:08:14.563668   31648 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 18:08:14.571379   31648 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1003 18:08:14.571411   31648 kubeadm.go:601] duration metric: took 19.407047ms to restartPrimaryControlPlane
	I1003 18:08:14.571423   31648 kubeadm.go:402] duration metric: took 53.42211ms to StartCluster
	I1003 18:08:14.571440   31648 settings.go:142] acquiring lock: {Name:mk6bc950503a8f341b8aacc07a8bc72d5db3a25c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:08:14.571546   31648 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:14.572080   31648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:08:14.572261   31648 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:08:14.572328   31648 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 18:08:14.572418   31648 addons.go:69] Setting storage-provisioner=true in profile "functional-889240"
	I1003 18:08:14.572440   31648 addons.go:238] Setting addon storage-provisioner=true in "functional-889240"
	I1003 18:08:14.572443   31648 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:08:14.572454   31648 addons.go:69] Setting default-storageclass=true in profile "functional-889240"
	I1003 18:08:14.572472   31648 host.go:66] Checking if "functional-889240" exists ...
	I1003 18:08:14.572481   31648 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-889240"
	I1003 18:08:14.572708   31648 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:08:14.572822   31648 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:08:14.574934   31648 out.go:179] * Verifying Kubernetes components...
	I1003 18:08:14.575948   31648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:08:14.591352   31648 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:14.591562   31648 kapi.go:59] client config for functional-889240: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:08:14.591895   31648 addons.go:238] Setting addon default-storageclass=true in "functional-889240"
	I1003 18:08:14.591927   31648 host.go:66] Checking if "functional-889240" exists ...
	I1003 18:08:14.592300   31648 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:08:14.592939   31648 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:08:14.594638   31648 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:14.594655   31648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 18:08:14.594693   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:14.617423   31648 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:14.617446   31648 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 18:08:14.617507   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:14.620273   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:14.639039   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:14.672807   31648 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:08:14.684788   31648 node_ready.go:35] waiting up to 6m0s for node "functional-889240" to be "Ready" ...
	I1003 18:08:14.684921   31648 type.go:168] "Request Body" body=""
	I1003 18:08:14.685003   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:14.685252   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:14.730950   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:14.745066   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:14.786328   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:14.786378   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:14.786409   31648 retry.go:31] will retry after 270.951246ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:14.798186   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:14.798232   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:14.798258   31648 retry.go:31] will retry after 360.152106ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
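
The applies above fail because nothing is answering on localhost:8441 yet, so the retry.go lines re-run each kubectl command with growing, jittered delays (270ms, 360ms, 493ms, ...). A minimal sketch of that retry pattern, with the apply command mirrored from the log; the backoff schedule and attempt cap are illustrative, not minikube's exact implementation:

// Sketch: retry a kubectl apply with jittered, growing backoff.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyManifest shells out the same way the ssh_runner lines above do.
func applyManifest(path string) error {
	return exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply", "--force", "-f", path).Run()
}

func main() {
	backoff := 250 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		if err := applyManifest("/etc/kubernetes/addons/storageclass.yaml"); err == nil {
			return // apply succeeded once the apiserver is reachable
		}
		// Growing, jittered delays comparable to the retry.go intervals above.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("attempt %d failed, will retry after %v\n", attempt, sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2
	}
}
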
	I1003 18:08:15.057602   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:15.106841   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:15.109109   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.109138   31648 retry.go:31] will retry after 397.537911ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.159331   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:15.185817   31648 type.go:168] "Request Body" body=""
	I1003 18:08:15.185883   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:15.186219   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:15.210176   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:15.210221   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.210238   31648 retry.go:31] will retry after 493.012433ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.507675   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:15.555577   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:15.557666   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.557696   31648 retry.go:31] will retry after 440.122822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.685949   31648 type.go:168] "Request Body" body=""
	I1003 18:08:15.686038   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:15.686370   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:15.703496   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:15.753710   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:15.753758   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.753776   31648 retry.go:31] will retry after 795.152031ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.998073   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:16.047743   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:16.047782   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.047802   31648 retry.go:31] will retry after 705.62402ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.185279   31648 type.go:168] "Request Body" body=""
	I1003 18:08:16.185360   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:16.185691   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:16.549101   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:16.597196   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:16.599345   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.599377   31648 retry.go:31] will retry after 940.255489ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.685633   31648 type.go:168] "Request Body" body=""
	I1003 18:08:16.685701   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:16.685999   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:16.686058   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
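
Interleaved with the apply retries, node_ready.go polls GET /api/v1/nodes/functional-889240 roughly every 500ms for up to 6 minutes, logging a connection-refused warning like the one above whenever the apiserver is still down. A minimal sketch of such a wait loop with client-go; the interval and helper name are illustrative:

// Sketch: wait for a node's Ready condition, tolerating transient apiserver errors.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Matches the "error getting node ... (will retry)" warnings above.
				fmt.Println("error getting node (will retry):", err)
				return false, nil // keep polling instead of aborting
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21625-8669/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "functional-889240"); err != nil {
		panic(err)
	}
}
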
	I1003 18:08:16.754204   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:16.801452   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:16.803457   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.803489   31648 retry.go:31] will retry after 1.24021873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:17.184970   31648 type.go:168] "Request Body" body=""
	I1003 18:08:17.185063   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:17.185424   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:17.539832   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:17.590758   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:17.590802   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:17.590823   31648 retry.go:31] will retry after 1.395425458s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:17.685012   31648 type.go:168] "Request Body" body=""
	I1003 18:08:17.685095   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:17.685454   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:18.043958   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:18.094735   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:18.094776   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:18.094793   31648 retry.go:31] will retry after 1.596032935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:18.185003   31648 type.go:168] "Request Body" body=""
	I1003 18:08:18.185100   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:18.185407   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:18.685017   31648 type.go:168] "Request Body" body=""
	I1003 18:08:18.685100   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:18.685393   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:18.986876   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:19.035593   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:19.038332   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:19.038363   31648 retry.go:31] will retry after 1.200373965s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:19.185671   31648 type.go:168] "Request Body" body=""
	I1003 18:08:19.185764   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:19.186105   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:19.186155   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:19.686009   31648 type.go:168] "Request Body" body=""
	I1003 18:08:19.686091   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:19.686423   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:19.691557   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:19.741190   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:19.743532   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:19.743567   31648 retry.go:31] will retry after 3.569328126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:20.185118   31648 type.go:168] "Request Body" body=""
	I1003 18:08:20.185184   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:20.185523   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:20.239734   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:20.289529   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:20.291706   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:20.291741   31648 retry.go:31] will retry after 1.81500567s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:20.685251   31648 type.go:168] "Request Body" body=""
	I1003 18:08:20.685325   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:20.685635   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:21.185510   31648 type.go:168] "Request Body" body=""
	I1003 18:08:21.185583   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:21.185888   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:21.685727   31648 type.go:168] "Request Body" body=""
	I1003 18:08:21.685836   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:21.686208   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:21.686275   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:22.107768   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:22.158032   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:22.158081   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:22.158100   31648 retry.go:31] will retry after 3.676335527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:22.185231   31648 type.go:168] "Request Body" body=""
	I1003 18:08:22.185319   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:22.185614   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:22.685370   31648 type.go:168] "Request Body" body=""
	I1003 18:08:22.685451   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:22.685806   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:23.185639   31648 type.go:168] "Request Body" body=""
	I1003 18:08:23.185743   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:23.186048   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:23.313354   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:23.364461   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:23.364519   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:23.364543   31648 retry.go:31] will retry after 3.926696561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:23.685958   31648 type.go:168] "Request Body" body=""
	I1003 18:08:23.686044   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:23.686339   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:23.686396   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:24.186039   31648 type.go:168] "Request Body" body=""
	I1003 18:08:24.186135   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:24.186455   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:24.685152   31648 type.go:168] "Request Body" body=""
	I1003 18:08:24.685228   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:24.685576   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:25.185310   31648 type.go:168] "Request Body" body=""
	I1003 18:08:25.185375   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:25.185715   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:25.685392   31648 type.go:168] "Request Body" body=""
	I1003 18:08:25.685465   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:25.685774   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:25.835120   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:25.883846   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:25.886330   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:25.886360   31648 retry.go:31] will retry after 9.086319041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:26.185864   31648 type.go:168] "Request Body" body=""
	I1003 18:08:26.185950   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:26.186312   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:26.186362   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:26.685071   31648 type.go:168] "Request Body" body=""
	I1003 18:08:26.685149   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:26.685486   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:27.185231   31648 type.go:168] "Request Body" body=""
	I1003 18:08:27.185303   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:27.185670   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:27.291951   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:27.344646   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:27.344705   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:27.344728   31648 retry.go:31] will retry after 9.233335187s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:27.685027   31648 type.go:168] "Request Body" body=""
	I1003 18:08:27.685131   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:27.685438   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:28.185051   31648 type.go:168] "Request Body" body=""
	I1003 18:08:28.185123   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:28.185416   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:28.685061   31648 type.go:168] "Request Body" body=""
	I1003 18:08:28.685136   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:28.685436   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:28.685488   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:29.185050   31648 type.go:168] "Request Body" body=""
	I1003 18:08:29.185116   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:29.185410   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:29.685011   31648 type.go:168] "Request Body" body=""
	I1003 18:08:29.685107   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:29.685414   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:30.185028   31648 type.go:168] "Request Body" body=""
	I1003 18:08:30.185114   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:30.185401   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:30.685020   31648 type.go:168] "Request Body" body=""
	I1003 18:08:30.685097   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:30.685428   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:31.185273   31648 type.go:168] "Request Body" body=""
	I1003 18:08:31.185345   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:31.185680   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:31.185733   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:31.685419   31648 type.go:168] "Request Body" body=""
	I1003 18:08:31.685507   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:31.685800   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:32.185743   31648 type.go:168] "Request Body" body=""
	I1003 18:08:32.185852   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:32.186217   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:32.684952   31648 type.go:168] "Request Body" body=""
	I1003 18:08:32.685038   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:32.685332   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:33.185084   31648 type.go:168] "Request Body" body=""
	I1003 18:08:33.185176   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:33.185536   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:33.685288   31648 type.go:168] "Request Body" body=""
	I1003 18:08:33.685369   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:33.685664   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:33.685725   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:34.185445   31648 type.go:168] "Request Body" body=""
	I1003 18:08:34.185522   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:34.185879   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:34.685599   31648 type.go:168] "Request Body" body=""
	I1003 18:08:34.685698   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:34.686052   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:34.973491   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:35.025995   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:35.026042   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:35.026060   31648 retry.go:31] will retry after 13.835197481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:35.185336   31648 type.go:168] "Request Body" body=""
	I1003 18:08:35.185419   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:35.185713   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:35.685344   31648 type.go:168] "Request Body" body=""
	I1003 18:08:35.685434   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:35.685770   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:35.685857   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:36.185648   31648 type.go:168] "Request Body" body=""
	I1003 18:08:36.185719   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:36.186013   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:36.578491   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:36.629045   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:36.629094   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:36.629123   31648 retry.go:31] will retry after 7.439097167s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:36.685279   31648 type.go:168] "Request Body" body=""
	I1003 18:08:36.685356   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:36.685671   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:37.185440   31648 type.go:168] "Request Body" body=""
	I1003 18:08:37.185503   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:37.185805   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:37.685609   31648 type.go:168] "Request Body" body=""
	I1003 18:08:37.685705   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:37.686055   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:37.686118   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
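The paired "Request"/"Response" lines are client-go's round-tripper debug output (round_trippers.go), emitted at high log verbosity. The same tracing can be reproduced in plain Go by wrapping http.DefaultTransport; a minimal sketch:

package main

import (
	"log"
	"net/http"
	"time"
)

// loggingTransport logs every request and its outcome, mimicking the
// round_trippers.go "Request"/"Response" lines above.
type loggingTransport struct{ next http.RoundTripper }

func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	log.Printf("Request verb=%q url=%q accept=%q", req.Method, req.URL, req.Header.Get("Accept"))
	resp, err := t.next.RoundTrip(req)
	ms := time.Since(start).Milliseconds()
	if err != nil {
		// e.g. "connect: connection refused" while the apiserver is down,
		// which is why the log shows empty status and milliseconds=0
		log.Printf("Response error=%v milliseconds=%d", err, ms)
		return nil, err
	}
	log.Printf("Response status=%q milliseconds=%d", resp.Status, ms)
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
	client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-889240")
}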
	I1003 18:08:38.185875   31648 type.go:168] "Request Body" body=""
	I1003 18:08:38.185966   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:38.186273   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:38.685047   31648 type.go:168] "Request Body" body=""
	I1003 18:08:38.685111   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:38.685422   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:39.185132   31648 type.go:168] "Request Body" body=""
	I1003 18:08:39.185219   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:39.185524   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:39.685244   31648 type.go:168] "Request Body" body=""
	I1003 18:08:39.685308   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:39.685620   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:40.185346   31648 type.go:168] "Request Body" body=""
	I1003 18:08:40.185409   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:40.185703   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:40.185782   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:40.685452   31648 type.go:168] "Request Body" body=""
	I1003 18:08:40.685560   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:40.685889   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:41.185504   31648 type.go:168] "Request Body" body=""
	I1003 18:08:41.185583   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:41.185889   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:41.685695   31648 type.go:168] "Request Body" body=""
	I1003 18:08:41.685767   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:41.686090   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:42.185782   31648 type.go:168] "Request Body" body=""
	I1003 18:08:42.185862   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:42.186224   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:42.186281   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:42.685859   31648 type.go:168] "Request Body" body=""
	I1003 18:08:42.685952   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:42.686271   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:43.185893   31648 type.go:168] "Request Body" body=""
	I1003 18:08:43.185999   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:43.186296   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:43.685944   31648 type.go:168] "Request Body" body=""
	I1003 18:08:43.686017   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:43.686309   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:44.068807   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:44.118932   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:44.118993   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:44.119018   31648 retry.go:31] will retry after 11.649333138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
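The "Process exited with status 1" wrapper text is added by minikube's command runner: kubectl is executed through ssh_runner and its non-zero exit is surfaced together with stdout/stderr. In Go that status arrives as an *exec.ExitError; a sketch of the pattern, reusing the command line from the log but running it locally rather than over SSH for brevity:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the log; sudo accepts the VAR=value form as an argument.
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply", "--force",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// kubectl exits 1 when it cannot download the OpenAPI schema for validation
		fmt.Printf("Process exited with status %d\n%s", exitErr.ExitCode(), out)
	}
}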
	I1003 18:08:44.185207   31648 type.go:168] "Request Body" body=""
	I1003 18:08:44.185271   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:44.185562   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:44.685354   31648 type.go:168] "Request Body" body=""
	I1003 18:08:44.685421   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:44.685759   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:44.685811   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:45.185341   31648 type.go:168] "Request Body" body=""
	I1003 18:08:45.185433   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:45.185739   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:45.685457   31648 type.go:168] "Request Body" body=""
	I1003 18:08:45.685529   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:45.685878   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:46.185715   31648 type.go:168] "Request Body" body=""
	I1003 18:08:46.185814   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:46.186178   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:46.685956   31648 type.go:168] "Request Body" body=""
	I1003 18:08:46.686040   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:46.686342   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:46.686417   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:47.185108   31648 type.go:168] "Request Body" body=""
	I1003 18:08:47.185173   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:47.185454   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:47.685185   31648 type.go:168] "Request Body" body=""
	I1003 18:08:47.685263   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:47.685629   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:48.185337   31648 type.go:168] "Request Body" body=""
	I1003 18:08:48.185401   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:48.185716   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:48.685423   31648 type.go:168] "Request Body" body=""
	I1003 18:08:48.685491   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:48.685791   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:48.862137   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:48.911551   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:48.911612   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:48.911635   31648 retry.go:31] will retry after 10.230842759s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
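Every failure in this stretch has the same root cause: nothing is listening on port 8441 while the apiserver restarts, so kubectl's OpenAPI download ([::1]:8441) and the node poll (192.168.49.2:8441) are both refused. That can be confirmed independently of kubectl with a plain TCP dial; the addresses below are taken from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	for _, addr := range []string{"localhost:8441", "192.168.49.2:8441"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// prints "connect: connection refused" while the apiserver is down
			fmt.Printf("%s: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", addr)
	}
}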
	I1003 18:08:49.184986   31648 type.go:168] "Request Body" body=""
	I1003 18:08:49.185056   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:49.185386   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:49.185450   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:49.685132   31648 type.go:168] "Request Body" body=""
	I1003 18:08:49.685197   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:49.685528   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:50.185253   31648 type.go:168] "Request Body" body=""
	I1003 18:08:50.185319   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:50.185649   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:50.685352   31648 type.go:168] "Request Body" body=""
	I1003 18:08:50.685456   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:50.685777   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:51.185614   31648 type.go:168] "Request Body" body=""
	I1003 18:08:51.185727   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:51.186089   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:51.186142   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:51.685865   31648 type.go:168] "Request Body" body=""
	I1003 18:08:51.685970   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:51.686292   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:52.185039   31648 type.go:168] "Request Body" body=""
	I1003 18:08:52.185145   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:52.185488   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:52.685238   31648 type.go:168] "Request Body" body=""
	I1003 18:08:52.685302   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:52.685617   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:53.185313   31648 type.go:168] "Request Body" body=""
	I1003 18:08:53.185377   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:53.185697   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:53.685459   31648 type.go:168] "Request Body" body=""
	I1003 18:08:53.685528   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:53.685880   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:53.685930   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:54.185736   31648 type.go:168] "Request Body" body=""
	I1003 18:08:54.185800   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:54.186122   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:54.685875   31648 type.go:168] "Request Body" body=""
	I1003 18:08:54.685940   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:54.686284   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:55.185038   31648 type.go:168] "Request Body" body=""
	I1003 18:08:55.185103   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:55.185420   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:55.685122   31648 type.go:168] "Request Body" body=""
	I1003 18:08:55.685213   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:55.685505   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:55.768789   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:55.820187   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:55.820247   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:55.820271   31648 retry.go:31] will retry after 17.817355848s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:56.185846   31648 type.go:168] "Request Body" body=""
	I1003 18:08:56.185913   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:56.186233   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:56.186374   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:56.685948   31648 type.go:168] "Request Body" body=""
	I1003 18:08:56.686081   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:56.686423   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:57.185019   31648 type.go:168] "Request Body" body=""
	I1003 18:08:57.185105   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:57.185399   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:57.684931   31648 type.go:168] "Request Body" body=""
	I1003 18:08:57.685041   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:57.685319   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:58.185047   31648 type.go:168] "Request Body" body=""
	I1003 18:08:58.185109   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:58.185402   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:58.685125   31648 type.go:168] "Request Body" body=""
	I1003 18:08:58.685211   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:58.685543   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:58.685617   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:59.143069   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:59.185821   31648 type.go:168] "Request Body" body=""
	I1003 18:08:59.185917   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:59.186232   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:59.193474   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:59.193510   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:59.193527   31648 retry.go:31] will retry after 25.255183485s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:59.685108   31648 type.go:168] "Request Body" body=""
	I1003 18:08:59.685198   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:59.685504   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:00.185069   31648 type.go:168] "Request Body" body=""
	I1003 18:09:00.185163   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:00.185465   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:00.685045   31648 type.go:168] "Request Body" body=""
	I1003 18:09:00.685107   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:00.685401   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:01.185250   31648 type.go:168] "Request Body" body=""
	I1003 18:09:01.185349   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:01.185688   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:01.185754   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:01.685310   31648 type.go:168] "Request Body" body=""
	I1003 18:09:01.685402   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:01.685720   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:02.185253   31648 type.go:168] "Request Body" body=""
	I1003 18:09:02.185346   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:02.185664   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:02.685182   31648 type.go:168] "Request Body" body=""
	I1003 18:09:02.685247   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:02.685567   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:03.185121   31648 type.go:168] "Request Body" body=""
	I1003 18:09:03.185184   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:03.185472   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:03.685069   31648 type.go:168] "Request Body" body=""
	I1003 18:09:03.685140   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:03.685473   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:03.685548   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:04.185138   31648 type.go:168] "Request Body" body=""
	I1003 18:09:04.185208   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:04.185511   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:04.685397   31648 type.go:168] "Request Body" body=""
	I1003 18:09:04.685498   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:04.685815   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:05.185368   31648 type.go:168] "Request Body" body=""
	I1003 18:09:05.185430   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:05.185752   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:05.685306   31648 type.go:168] "Request Body" body=""
	I1003 18:09:05.685399   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:05.685722   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:05.685773   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:06.185506   31648 type.go:168] "Request Body" body=""
	I1003 18:09:06.185596   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:06.185889   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:06.685509   31648 type.go:168] "Request Body" body=""
	I1003 18:09:06.685600   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:06.685920   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:07.185528   31648 type.go:168] "Request Body" body=""
	I1003 18:09:07.185591   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:07.185930   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:07.685592   31648 type.go:168] "Request Body" body=""
	I1003 18:09:07.685666   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:07.686000   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:07.686050   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:08.185578   31648 type.go:168] "Request Body" body=""
	I1003 18:09:08.185676   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:08.185969   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:08.685655   31648 type.go:168] "Request Body" body=""
	I1003 18:09:08.685728   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:08.686124   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:09.185744   31648 type.go:168] "Request Body" body=""
	I1003 18:09:09.185811   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:09.186109   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:09.685870   31648 type.go:168] "Request Body" body=""
	I1003 18:09:09.685938   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:09.686249   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:09.686300   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:10.185899   31648 type.go:168] "Request Body" body=""
	I1003 18:09:10.185995   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:10.186296   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:10.684943   31648 type.go:168] "Request Body" body=""
	I1003 18:09:10.685033   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:10.685323   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:11.185004   31648 type.go:168] "Request Body" body=""
	I1003 18:09:11.185066   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:11.185370   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:11.684959   31648 type.go:168] "Request Body" body=""
	I1003 18:09:11.685050   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:11.685368   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:12.184955   31648 type.go:168] "Request Body" body=""
	I1003 18:09:12.185063   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:12.185367   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:12.185420   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:12.684941   31648 type.go:168] "Request Body" body=""
	I1003 18:09:12.685054   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:12.685356   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:13.185955   31648 type.go:168] "Request Body" body=""
	I1003 18:09:13.186031   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:13.186349   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:13.637912   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:09:13.685539   31648 type.go:168] "Request Body" body=""
	I1003 18:09:13.685624   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:13.685989   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:13.686249   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:09:13.688536   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:09:13.688567   31648 retry.go:31] will retry after 16.395640375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
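Note the interleaving at 18:09:13 above: the command_runner output from the apply lands between two poll iterations, because the addon retry loop and the readiness poll run concurrently. A toy sketch of that structure, purely illustrative and not minikube's code:

package main

import (
	"fmt"
	"time"
)

func main() {
	done := make(chan struct{})
	// readiness poll every 500ms, as in the log
	go func() {
		t := time.NewTicker(500 * time.Millisecond)
		defer t.Stop()
		for {
			select {
			case <-done:
				return
			case <-t.C:
				fmt.Println("GET /api/v1/nodes/functional-889240")
			}
		}
	}()
	// the addon apply loop runs in parallel, so its output interleaves
	for i := 0; i < 3; i++ {
		fmt.Println("kubectl apply --force -f storage-provisioner.yaml")
		time.Sleep(1200 * time.Millisecond)
	}
	close(done)
}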
	I1003 18:09:14.185086   31648 type.go:168] "Request Body" body=""
	I1003 18:09:14.185158   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:14.185474   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:14.185528   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:14.685417   31648 type.go:168] "Request Body" body=""
	I1003 18:09:14.685504   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:14.685861   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:15.185730   31648 type.go:168] "Request Body" body=""
	I1003 18:09:15.185803   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:15.186135   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:15.685950   31648 type.go:168] "Request Body" body=""
	I1003 18:09:15.686047   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:15.686390   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:16.185313   31648 type.go:168] "Request Body" body=""
	I1003 18:09:16.185381   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:16.185711   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:16.185784   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:16.685449   31648 type.go:168] "Request Body" body=""
	I1003 18:09:16.685527   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:16.685889   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:17.185723   31648 type.go:168] "Request Body" body=""
	I1003 18:09:17.185815   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:17.186154   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:17.685963   31648 type.go:168] "Request Body" body=""
	I1003 18:09:17.686103   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:17.686430   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:18.185163   31648 type.go:168] "Request Body" body=""
	I1003 18:09:18.185228   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:18.185536   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:18.685287   31648 type.go:168] "Request Body" body=""
	I1003 18:09:18.685398   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:18.685756   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:18.685818   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:19.185602   31648 type.go:168] "Request Body" body=""
	I1003 18:09:19.185674   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:19.186025   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:19.685824   31648 type.go:168] "Request Body" body=""
	I1003 18:09:19.685902   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:19.686264   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:20.185104   31648 type.go:168] "Request Body" body=""
	I1003 18:09:20.185178   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:20.185565   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:20.685343   31648 type.go:168] "Request Body" body=""
	I1003 18:09:20.685448   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:20.685814   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:20.685865   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:21.185641   31648 type.go:168] "Request Body" body=""
	I1003 18:09:21.185717   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:21.186091   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:21.685899   31648 type.go:168] "Request Body" body=""
	I1003 18:09:21.686019   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:21.686347   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:22.185083   31648 type.go:168] "Request Body" body=""
	I1003 18:09:22.185175   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:22.185486   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:22.685245   31648 type.go:168] "Request Body" body=""
	I1003 18:09:22.685334   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:22.685730   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:23.185497   31648 type.go:168] "Request Body" body=""
	I1003 18:09:23.185562   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:23.185880   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:23.185935   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:23.685744   31648 type.go:168] "Request Body" body=""
	I1003 18:09:23.685811   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:23.686201   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:24.184964   31648 type.go:168] "Request Body" body=""
	I1003 18:09:24.185078   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:24.185397   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:24.449821   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:09:24.497529   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:09:24.499857   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:09:24.499886   31648 retry.go:31] will retry after 48.383287224s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
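retry.go:31 schedules the next apply attempt after a long randomized delay (48.38 s here, 44.32 s for the sibling storage-provisioner apply below). A hedged sketch of that retry-with-jittered-backoff shape; the doubling-plus-jitter policy is an assumption for illustration, not necessarily the policy in minikube's retry package:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn, doubling the delay each attempt and adding
	// up to 50% jitter, which is how unequal waits for similar operations arise.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			d := base << uint(i)
			d += time.Duration(rand.Int63n(int64(d/2 + 1)))
			fmt.Printf("will retry after %s: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		_ = retryWithBackoff(3, time.Second, func() error {
			return errors.New("connection refused")
		})
	}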
	I1003 18:09:24.685468   31648 type.go:168] "Request Body" body=""
	I1003 18:09:24.685534   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:24.685867   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:25.185654   31648 type.go:168] "Request Body" body=""
	I1003 18:09:25.185748   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:25.186075   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:25.186127   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:25.685902   31648 type.go:168] "Request Body" body=""
	I1003 18:09:25.685999   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:25.686299   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:26.185018   31648 type.go:168] "Request Body" body=""
	I1003 18:09:26.185106   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:26.185414   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:26.685136   31648 type.go:168] "Request Body" body=""
	I1003 18:09:26.685216   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:26.685515   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:27.185253   31648 type.go:168] "Request Body" body=""
	I1003 18:09:27.185318   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:27.185650   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:27.685386   31648 type.go:168] "Request Body" body=""
	I1003 18:09:27.685451   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:27.685791   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:27.685845   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:28.185583   31648 type.go:168] "Request Body" body=""
	I1003 18:09:28.185675   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:28.186015   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:28.685836   31648 type.go:168] "Request Body" body=""
	I1003 18:09:28.685940   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:28.686317   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:29.185053   31648 type.go:168] "Request Body" body=""
	I1003 18:09:29.185118   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:29.185421   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:29.685145   31648 type.go:168] "Request Body" body=""
	I1003 18:09:29.685239   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:29.685545   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:30.085101   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:09:30.133826   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:09:30.136048   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:09:30.136077   31648 retry.go:31] will retry after 44.319890963s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
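Each apply attempt (ssh_runner.go:195) is a kubectl invocation run inside the node. Stripped of the SSH transport, the call has roughly the following os/exec shape; the command line is copied from the log, but the wiring around it is an assumption, not minikube's actual ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Fails with exit status 1 while the apiserver is down, because kubectl
		// cannot download the OpenAPI schema it uses for client-side validation.
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.34.1/kubectl",
			"apply", "--force", "-f",
			"/etc/kubernetes/addons/storage-provisioner.yaml")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}

As the stderr notes, passing --validate=false would skip the OpenAPI download, but the apply itself would still need a reachable apiserver.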
	I1003 18:09:30.185379   31648 type.go:168] "Request Body" body=""
	I1003 18:09:30.185467   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:30.185752   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:30.185824   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:30.685605   31648 type.go:168] "Request Body" body=""
	I1003 18:09:30.685677   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:30.686026   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:31.185741   31648 type.go:168] "Request Body" body=""
	I1003 18:09:31.185821   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:31.186131   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:31.685990   31648 type.go:168] "Request Body" body=""
	I1003 18:09:31.686102   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:31.686418   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:32.185174   31648 type.go:168] "Request Body" body=""
	I1003 18:09:32.185268   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:32.185574   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:32.685346   31648 type.go:168] "Request Body" body=""
	I1003 18:09:32.685414   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:32.685749   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:32.685798   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:33.185523   31648 type.go:168] "Request Body" body=""
	I1003 18:09:33.185630   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:33.185973   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:33.685847   31648 type.go:168] "Request Body" body=""
	I1003 18:09:33.685917   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:33.686290   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:34.185044   31648 type.go:168] "Request Body" body=""
	I1003 18:09:34.185158   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:34.185479   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:34.685329   31648 type.go:168] "Request Body" body=""
	I1003 18:09:34.685395   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:34.685778   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:34.685850   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:35.185617   31648 type.go:168] "Request Body" body=""
	I1003 18:09:35.185711   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:35.186046   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:35.685845   31648 type.go:168] "Request Body" body=""
	I1003 18:09:35.685931   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:35.686261   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:36.184952   31648 type.go:168] "Request Body" body=""
	I1003 18:09:36.185036   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:36.185378   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:36.685083   31648 type.go:168] "Request Body" body=""
	I1003 18:09:36.685158   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:36.685526   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:37.185252   31648 type.go:168] "Request Body" body=""
	I1003 18:09:37.185333   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:37.185680   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:37.185740   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:37.685420   31648 type.go:168] "Request Body" body=""
	I1003 18:09:37.685494   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:37.685856   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:38.185680   31648 type.go:168] "Request Body" body=""
	I1003 18:09:38.185779   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:38.186105   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:38.685935   31648 type.go:168] "Request Body" body=""
	I1003 18:09:38.686035   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:38.686351   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:39.185118   31648 type.go:168] "Request Body" body=""
	I1003 18:09:39.185189   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:39.185487   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:39.685188   31648 type.go:168] "Request Body" body=""
	I1003 18:09:39.685265   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:39.685570   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:39.685631   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:40.185362   31648 type.go:168] "Request Body" body=""
	I1003 18:09:40.185457   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:40.185802   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:40.685609   31648 type.go:168] "Request Body" body=""
	I1003 18:09:40.685713   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:40.686101   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:41.186030   31648 type.go:168] "Request Body" body=""
	I1003 18:09:41.186101   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:41.186433   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:41.685075   31648 type.go:168] "Request Body" body=""
	I1003 18:09:41.685142   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:41.685469   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:42.185193   31648 type.go:168] "Request Body" body=""
	I1003 18:09:42.185257   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:42.185565   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:42.185630   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:42.685077   31648 type.go:168] "Request Body" body=""
	I1003 18:09:42.685172   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:42.685483   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:43.185219   31648 type.go:168] "Request Body" body=""
	I1003 18:09:43.185289   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:43.185605   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:43.685108   31648 type.go:168] "Request Body" body=""
	I1003 18:09:43.685175   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:43.685496   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:44.185214   31648 type.go:168] "Request Body" body=""
	I1003 18:09:44.185314   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:44.185626   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:44.185696   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:44.685443   31648 type.go:168] "Request Body" body=""
	I1003 18:09:44.685535   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:44.685860   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:45.185669   31648 type.go:168] "Request Body" body=""
	I1003 18:09:45.185734   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:45.186050   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:45.685869   31648 type.go:168] "Request Body" body=""
	I1003 18:09:45.685940   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:45.686258   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:46.184960   31648 type.go:168] "Request Body" body=""
	I1003 18:09:46.185084   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:46.185423   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:46.685149   31648 type.go:168] "Request Body" body=""
	I1003 18:09:46.685219   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:46.685543   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:46.685599   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:47.185302   31648 type.go:168] "Request Body" body=""
	I1003 18:09:47.185370   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:47.185710   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:47.685432   31648 type.go:168] "Request Body" body=""
	I1003 18:09:47.685496   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:47.685808   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:48.185599   31648 type.go:168] "Request Body" body=""
	I1003 18:09:48.185663   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:48.186043   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:48.685839   31648 type.go:168] "Request Body" body=""
	I1003 18:09:48.685931   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:48.686255   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:48.686305   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:49.185022   31648 type.go:168] "Request Body" body=""
	I1003 18:09:49.185091   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:49.185409   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:49.685097   31648 type.go:168] "Request Body" body=""
	I1003 18:09:49.685189   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:49.685510   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:50.185245   31648 type.go:168] "Request Body" body=""
	I1003 18:09:50.185317   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:50.185675   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:50.685396   31648 type.go:168] "Request Body" body=""
	I1003 18:09:50.685460   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:50.685814   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:51.185668   31648 type.go:168] "Request Body" body=""
	I1003 18:09:51.185757   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:51.186064   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:51.186116   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:51.685866   31648 type.go:168] "Request Body" body=""
	I1003 18:09:51.685934   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:51.686277   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:52.185003   31648 type.go:168] "Request Body" body=""
	I1003 18:09:52.185067   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:52.185368   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:52.685121   31648 type.go:168] "Request Body" body=""
	I1003 18:09:52.685219   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:52.685573   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:53.185280   31648 type.go:168] "Request Body" body=""
	I1003 18:09:53.185339   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:53.185633   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:53.685331   31648 type.go:168] "Request Body" body=""
	I1003 18:09:53.685395   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:53.685759   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:53.685836   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:54.185620   31648 type.go:168] "Request Body" body=""
	I1003 18:09:54.185691   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:54.186007   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:54.685714   31648 type.go:168] "Request Body" body=""
	I1003 18:09:54.685778   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:54.686135   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:55.185951   31648 type.go:168] "Request Body" body=""
	I1003 18:09:55.186058   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:55.186387   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:55.685101   31648 type.go:168] "Request Body" body=""
	I1003 18:09:55.685193   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:55.685564   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:56.185405   31648 type.go:168] "Request Body" body=""
	I1003 18:09:56.185491   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:56.185823   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:56.185874   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:56.685614   31648 type.go:168] "Request Body" body=""
	I1003 18:09:56.685702   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:56.686026   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:57.185904   31648 type.go:168] "Request Body" body=""
	I1003 18:09:57.186000   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:57.186336   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:57.685087   31648 type.go:168] "Request Body" body=""
	I1003 18:09:57.685160   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:57.685447   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:58.185160   31648 type.go:168] "Request Body" body=""
	I1003 18:09:58.185246   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:58.185558   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:58.685303   31648 type.go:168] "Request Body" body=""
	I1003 18:09:58.685365   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:58.685671   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:58.685755   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:59.185446   31648 type.go:168] "Request Body" body=""
	I1003 18:09:59.185545   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:59.185914   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:59.685737   31648 type.go:168] "Request Body" body=""
	I1003 18:09:59.685801   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:59.686146   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:00.185972   31648 type.go:168] "Request Body" body=""
	I1003 18:10:00.186075   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:00.186364   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:00.685077   31648 type.go:168] "Request Body" body=""
	I1003 18:10:00.685166   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:00.685464   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:01.185382   31648 type.go:168] "Request Body" body=""
	I1003 18:10:01.185446   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:01.185778   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:01.185830   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:01.685606   31648 type.go:168] "Request Body" body=""
	I1003 18:10:01.685677   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:01.686032   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:02.185907   31648 type.go:168] "Request Body" body=""
	I1003 18:10:02.186020   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:02.186378   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:02.685091   31648 type.go:168] "Request Body" body=""
	I1003 18:10:02.685152   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:02.685445   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:03.185142   31648 type.go:168] "Request Body" body=""
	I1003 18:10:03.185225   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:03.185561   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:03.685236   31648 type.go:168] "Request Body" body=""
	I1003 18:10:03.685339   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:03.685634   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:03.685696   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:04.185365   31648 type.go:168] "Request Body" body=""
	I1003 18:10:04.185433   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:04.185727   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:04.685562   31648 type.go:168] "Request Body" body=""
	I1003 18:10:04.685630   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:04.686027   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:05.185808   31648 type.go:168] "Request Body" body=""
	I1003 18:10:05.185875   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:05.186210   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:05.686012   31648 type.go:168] "Request Body" body=""
	I1003 18:10:05.686094   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:05.686420   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:05.686513   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:06.185220   31648 type.go:168] "Request Body" body=""
	I1003 18:10:06.185317   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:06.185670   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:06.685370   31648 type.go:168] "Request Body" body=""
	I1003 18:10:06.685434   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:06.685727   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:07.185434   31648 type.go:168] "Request Body" body=""
	I1003 18:10:07.185512   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:07.185878   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:07.685679   31648 type.go:168] "Request Body" body=""
	I1003 18:10:07.685748   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:07.686309   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:08.185067   31648 type.go:168] "Request Body" body=""
	I1003 18:10:08.185137   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:08.185459   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:08.185516   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:08.685191   31648 type.go:168] "Request Body" body=""
	I1003 18:10:08.685261   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:08.685582   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:09.185329   31648 type.go:168] "Request Body" body=""
	I1003 18:10:09.185397   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:09.185705   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:09.685441   31648 type.go:168] "Request Body" body=""
	I1003 18:10:09.685504   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:09.685840   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:10.185620   31648 type.go:168] "Request Body" body=""
	I1003 18:10:10.185689   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:10.186037   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:10.186087   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:10.685838   31648 type.go:168] "Request Body" body=""
	I1003 18:10:10.685914   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:10.686280   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:11.184954   31648 type.go:168] "Request Body" body=""
	I1003 18:10:11.185044   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:11.185353   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:11.685099   31648 type.go:168] "Request Body" body=""
	I1003 18:10:11.685168   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:11.685473   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:12.185192   31648 type.go:168] "Request Body" body=""
	I1003 18:10:12.185259   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:12.185564   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:12.685315   31648 type.go:168] "Request Body" body=""
	I1003 18:10:12.685386   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:12.685819   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:12.685875   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:12.884184   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:10:12.932382   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:10:12.934859   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:10:12.935018   31648 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
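Once the retry budget is exhausted, each addon's failure is surfaced as an out.go warning wrapping the callback error in brackets. A sketch of that run-callbacks-and-collect pattern (the names here are assumptions; minikube's real version lives in addons.go):

	package main

	import "fmt"

	// enableAddons runs each addon's apply callback, reports failures in the
	// same shape as the log, and collects successful names for the summary.
	func enableAddons(callbacks map[string]func() error) []string {
		enabled := []string{}
		for name, cb := range callbacks {
			if err := cb(); err != nil {
				fmt.Printf("! Enabling '%s' returned an error: running callbacks: [%v]\n", name, err)
				continue
			}
			enabled = append(enabled, name)
		}
		return enabled
	}

	func main() {
		enabled := enableAddons(map[string]func() error{
			"default-storageclass": func() error { return fmt.Errorf("Process exited with status 1") },
			"storage-provisioner":  func() error { return fmt.Errorf("Process exited with status 1") },
		})
		fmt.Println("* Enabled addons:", enabled) // empty here, matching the log
	}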
	[... the same GET re-polled at 18:10:13.185, 18:10:13.685, and 18:10:14.185; every response empty, every attempt refused ...]
	I1003 18:10:14.456560   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:10:14.507486   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:10:14.509939   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:10:14.510064   31648 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
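Both addon applies fail for the same reason: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and with nothing listening on port 8441 that download is refused before the manifest is even sent. The error text names the escape hatch itself; a hypothetical manual retry would add --validate=false, though the apply would still need a reachable apiserver to succeed:

    # Hypothetical retry following the hint in the kubectl error above. Skipping
    # client-side validation avoids the OpenAPI download, but the apply itself
    # still fails until the apiserver on 192.168.49.2:8441 accepts connections.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/storageclass.yaml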
	I1003 18:10:14.512677   31648 out.go:179] * Enabled addons: 
	I1003 18:10:14.514281   31648 addons.go:514] duration metric: took 1m59.941954445s for enable addons: enabled=[]
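The request/response chatter before and after this point is minikube's node-readiness wait: the node object is fetched roughly every 500ms, and each connection failure is logged at node_ready.go:55 and retried. A shell sketch of the same wait (illustrative only, not minikube's actual Go loop) would be:

    # Illustrative equivalent of the node_ready wait loop seen in this log:
    # poll the node's Ready condition every 500ms until the apiserver answers
    # and the condition reports True.
    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.1/kubectl get node functional-889240 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null \
      | grep -q True
    do
      sleep 0.5
    done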
	I1003 18:10:14.685449   31648 type.go:168] "Request Body" body=""
	I1003 18:10:14.685516   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:14.685857   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:14.685919   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET re-polled every ~500ms from 18:10:15.185 through 18:11:10.185; every response empty, every attempt "dial tcp 192.168.49.2:8441: connect: connection refused", with the node_ready.go:55 "will retry" warning re-logged roughly every 2.5s (18:10:17.185, 18:10:19.186, ..., last at 18:11:09.686) ...]
	I1003 18:11:10.685125   31648 type.go:168] "Request Body" body=""
	I1003 18:11:10.685235   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:10.685580   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:11.185333   31648 type.go:168] "Request Body" body=""
	I1003 18:11:11.185400   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:11.185721   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:11.685427   31648 type.go:168] "Request Body" body=""
	I1003 18:11:11.685540   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:11.685876   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:12.185659   31648 type.go:168] "Request Body" body=""
	I1003 18:11:12.185756   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:12.186078   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:12.186142   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:12.685887   31648 type.go:168] "Request Body" body=""
	I1003 18:11:12.685959   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:12.686282   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:13.185003   31648 type.go:168] "Request Body" body=""
	I1003 18:11:13.185081   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:13.185409   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:13.685094   31648 type.go:168] "Request Body" body=""
	I1003 18:11:13.685164   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:13.685478   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:14.185184   31648 type.go:168] "Request Body" body=""
	I1003 18:11:14.185260   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:14.185598   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:14.685408   31648 type.go:168] "Request Body" body=""
	I1003 18:11:14.685477   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:14.685794   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:14.685865   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:15.185614   31648 type.go:168] "Request Body" body=""
	I1003 18:11:15.185690   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:15.186097   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:15.685915   31648 type.go:168] "Request Body" body=""
	I1003 18:11:15.686020   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:15.686331   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:16.185164   31648 type.go:168] "Request Body" body=""
	I1003 18:11:16.185233   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:16.185540   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:16.685230   31648 type.go:168] "Request Body" body=""
	I1003 18:11:16.685290   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:16.685601   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:17.185312   31648 type.go:168] "Request Body" body=""
	I1003 18:11:17.185380   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:17.185697   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:17.185779   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:17.685436   31648 type.go:168] "Request Body" body=""
	I1003 18:11:17.685502   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:17.685845   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:18.185654   31648 type.go:168] "Request Body" body=""
	I1003 18:11:18.185717   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:18.186072   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:18.685861   31648 type.go:168] "Request Body" body=""
	I1003 18:11:18.685924   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:18.686240   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:19.185000   31648 type.go:168] "Request Body" body=""
	I1003 18:11:19.185076   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:19.185392   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:19.685130   31648 type.go:168] "Request Body" body=""
	I1003 18:11:19.685199   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:19.685540   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:19.685603   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:20.185304   31648 type.go:168] "Request Body" body=""
	I1003 18:11:20.185368   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:20.185692   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:20.685437   31648 type.go:168] "Request Body" body=""
	I1003 18:11:20.685512   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:20.685889   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:21.185654   31648 type.go:168] "Request Body" body=""
	I1003 18:11:21.185736   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:21.186088   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:21.685864   31648 type.go:168] "Request Body" body=""
	I1003 18:11:21.685950   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:21.686257   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:21.686310   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:22.185029   31648 type.go:168] "Request Body" body=""
	I1003 18:11:22.185128   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:22.185448   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:22.685177   31648 type.go:168] "Request Body" body=""
	I1003 18:11:22.685257   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:22.685561   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:23.185277   31648 type.go:168] "Request Body" body=""
	I1003 18:11:23.185353   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:23.185666   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:23.685362   31648 type.go:168] "Request Body" body=""
	I1003 18:11:23.685435   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:23.685751   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:24.185475   31648 type.go:168] "Request Body" body=""
	I1003 18:11:24.185552   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:24.185910   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:24.185963   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:24.685584   31648 type.go:168] "Request Body" body=""
	I1003 18:11:24.685659   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:24.685971   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:25.185758   31648 type.go:168] "Request Body" body=""
	I1003 18:11:25.185842   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:25.186204   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:25.685956   31648 type.go:168] "Request Body" body=""
	I1003 18:11:25.686040   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:25.686348   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:26.185071   31648 type.go:168] "Request Body" body=""
	I1003 18:11:26.185144   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:26.185483   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:26.685189   31648 type.go:168] "Request Body" body=""
	I1003 18:11:26.685255   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:26.685555   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:26.685624   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:27.185293   31648 type.go:168] "Request Body" body=""
	I1003 18:11:27.185364   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:27.185670   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:27.685353   31648 type.go:168] "Request Body" body=""
	I1003 18:11:27.685417   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:27.685713   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:28.185462   31648 type.go:168] "Request Body" body=""
	I1003 18:11:28.185529   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:28.185838   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:28.685636   31648 type.go:168] "Request Body" body=""
	I1003 18:11:28.685711   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:28.686033   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:28.686095   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:29.185891   31648 type.go:168] "Request Body" body=""
	I1003 18:11:29.185959   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:29.186289   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:29.684999   31648 type.go:168] "Request Body" body=""
	I1003 18:11:29.685063   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:29.685358   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:30.185079   31648 type.go:168] "Request Body" body=""
	I1003 18:11:30.185147   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:30.185448   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:30.685153   31648 type.go:168] "Request Body" body=""
	I1003 18:11:30.685224   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:30.685542   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:31.185387   31648 type.go:168] "Request Body" body=""
	I1003 18:11:31.185470   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:31.185801   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:31.185869   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:31.685601   31648 type.go:168] "Request Body" body=""
	I1003 18:11:31.685665   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:31.686013   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:32.185823   31648 type.go:168] "Request Body" body=""
	I1003 18:11:32.185918   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:32.186314   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:32.685025   31648 type.go:168] "Request Body" body=""
	I1003 18:11:32.685090   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:32.685396   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:33.185093   31648 type.go:168] "Request Body" body=""
	I1003 18:11:33.185177   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:33.185492   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:33.685174   31648 type.go:168] "Request Body" body=""
	I1003 18:11:33.685294   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:33.685598   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:33.685653   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:34.185347   31648 type.go:168] "Request Body" body=""
	I1003 18:11:34.185424   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:34.185757   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:34.685584   31648 type.go:168] "Request Body" body=""
	I1003 18:11:34.685700   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:34.686040   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:35.185805   31648 type.go:168] "Request Body" body=""
	I1003 18:11:35.185867   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:35.186199   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:35.685954   31648 type.go:168] "Request Body" body=""
	I1003 18:11:35.686050   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:35.686359   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:35.686411   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:36.185172   31648 type.go:168] "Request Body" body=""
	I1003 18:11:36.185238   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:36.185535   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:36.685215   31648 type.go:168] "Request Body" body=""
	I1003 18:11:36.685302   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:36.685612   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:37.185339   31648 type.go:168] "Request Body" body=""
	I1003 18:11:37.185403   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:37.185728   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:37.685401   31648 type.go:168] "Request Body" body=""
	I1003 18:11:37.685477   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:37.685800   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:38.185642   31648 type.go:168] "Request Body" body=""
	I1003 18:11:38.185720   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:38.186056   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:38.186115   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:38.685846   31648 type.go:168] "Request Body" body=""
	I1003 18:11:38.685908   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:38.686230   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:39.184965   31648 type.go:168] "Request Body" body=""
	I1003 18:11:39.185068   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:39.185389   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:39.685076   31648 type.go:168] "Request Body" body=""
	I1003 18:11:39.685138   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:39.685429   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:40.185151   31648 type.go:168] "Request Body" body=""
	I1003 18:11:40.185227   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:40.185552   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:40.685234   31648 type.go:168] "Request Body" body=""
	I1003 18:11:40.685299   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:40.685612   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:40.685679   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:41.185407   31648 type.go:168] "Request Body" body=""
	I1003 18:11:41.185475   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:41.185810   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:41.685588   31648 type.go:168] "Request Body" body=""
	I1003 18:11:41.685663   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:41.685999   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:42.185821   31648 type.go:168] "Request Body" body=""
	I1003 18:11:42.185909   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:42.186287   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:42.685035   31648 type.go:168] "Request Body" body=""
	I1003 18:11:42.685109   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:42.685460   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:43.185163   31648 type.go:168] "Request Body" body=""
	I1003 18:11:43.185226   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:43.185569   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:43.185640   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:43.685320   31648 type.go:168] "Request Body" body=""
	I1003 18:11:43.685387   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:43.685687   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:44.185376   31648 type.go:168] "Request Body" body=""
	I1003 18:11:44.185445   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:44.185795   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:44.685599   31648 type.go:168] "Request Body" body=""
	I1003 18:11:44.685672   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:44.686013   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:45.185797   31648 type.go:168] "Request Body" body=""
	I1003 18:11:45.185863   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:45.186210   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:45.186272   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:45.684943   31648 type.go:168] "Request Body" body=""
	I1003 18:11:45.685023   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:45.685323   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:46.184972   31648 type.go:168] "Request Body" body=""
	I1003 18:11:46.185063   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:46.185368   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:46.685078   31648 type.go:168] "Request Body" body=""
	I1003 18:11:46.685143   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:46.685436   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:47.185171   31648 type.go:168] "Request Body" body=""
	I1003 18:11:47.185237   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:47.185530   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:47.685229   31648 type.go:168] "Request Body" body=""
	I1003 18:11:47.685292   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:47.685573   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:47.685625   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:48.185308   31648 type.go:168] "Request Body" body=""
	I1003 18:11:48.185378   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:48.185726   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:48.685435   31648 type.go:168] "Request Body" body=""
	I1003 18:11:48.685502   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:48.685818   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:49.185572   31648 type.go:168] "Request Body" body=""
	I1003 18:11:49.185639   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:49.185951   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:49.685755   31648 type.go:168] "Request Body" body=""
	I1003 18:11:49.685820   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:49.686165   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:49.686226   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:50.185972   31648 type.go:168] "Request Body" body=""
	I1003 18:11:50.186049   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:50.186347   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:50.685077   31648 type.go:168] "Request Body" body=""
	I1003 18:11:50.685149   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:50.685487   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:51.185355   31648 type.go:168] "Request Body" body=""
	I1003 18:11:51.185423   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:51.185749   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:51.685438   31648 type.go:168] "Request Body" body=""
	I1003 18:11:51.685502   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:51.685808   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:52.185581   31648 type.go:168] "Request Body" body=""
	I1003 18:11:52.185644   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:52.185967   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:52.186043   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:52.685763   31648 type.go:168] "Request Body" body=""
	I1003 18:11:52.685866   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:52.686218   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:53.184953   31648 type.go:168] "Request Body" body=""
	I1003 18:11:53.185051   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:53.185365   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:53.685069   31648 type.go:168] "Request Body" body=""
	I1003 18:11:53.685143   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:53.685457   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:54.185161   31648 type.go:168] "Request Body" body=""
	I1003 18:11:54.185226   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:54.185562   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:54.685310   31648 type.go:168] "Request Body" body=""
	I1003 18:11:54.685387   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:54.685726   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:54.685776   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:55.185417   31648 type.go:168] "Request Body" body=""
	I1003 18:11:55.185483   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:55.185815   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:55.685573   31648 type.go:168] "Request Body" body=""
	I1003 18:11:55.685677   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:55.686027   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:56.185731   31648 type.go:168] "Request Body" body=""
	I1003 18:11:56.185792   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:56.186116   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:56.685906   31648 type.go:168] "Request Body" body=""
	I1003 18:11:56.686004   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:56.686321   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:56.686379   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:57.185067   31648 type.go:168] "Request Body" body=""
	I1003 18:11:57.185134   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:57.185426   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:57.685144   31648 type.go:168] "Request Body" body=""
	I1003 18:11:57.685226   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:57.685539   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:58.185226   31648 type.go:168] "Request Body" body=""
	I1003 18:11:58.185291   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:58.185597   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:58.685288   31648 type.go:168] "Request Body" body=""
	I1003 18:11:58.685373   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:58.685689   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:59.185369   31648 type.go:168] "Request Body" body=""
	I1003 18:11:59.185441   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:59.185768   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:59.185831   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:59.685575   31648 type.go:168] "Request Body" body=""
	I1003 18:11:59.685674   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:59.686024   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-889240 poll repeated at ~500ms intervals from 18:12:00 through 18:13:01, every attempt returning an empty response (status="" milliseconds=0); node_ready.go:55 logged the identical "connection refused" warning roughly every 2s ...]
	I1003 18:13:01.185697   31648 type.go:168] "Request Body" body=""
	I1003 18:13:01.185765   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:01.186114   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:01.186172   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:01.685762   31648 type.go:168] "Request Body" body=""
	I1003 18:13:01.685852   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:01.686240   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:02.185865   31648 type.go:168] "Request Body" body=""
	I1003 18:13:02.185951   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:02.186283   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:02.685917   31648 type.go:168] "Request Body" body=""
	I1003 18:13:02.686014   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:02.686332   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:03.185942   31648 type.go:168] "Request Body" body=""
	I1003 18:13:03.186032   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:03.186345   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:03.186397   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:03.684942   31648 type.go:168] "Request Body" body=""
	I1003 18:13:03.685055   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:03.685383   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:04.184939   31648 type.go:168] "Request Body" body=""
	I1003 18:13:04.185041   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:04.185351   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:04.685279   31648 type.go:168] "Request Body" body=""
	I1003 18:13:04.685358   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:04.685695   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:05.185233   31648 type.go:168] "Request Body" body=""
	I1003 18:13:05.185306   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:05.185608   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:05.685179   31648 type.go:168] "Request Body" body=""
	I1003 18:13:05.685255   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:05.685582   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:05.685657   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:06.185409   31648 type.go:168] "Request Body" body=""
	I1003 18:13:06.185478   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:06.185807   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:06.685397   31648 type.go:168] "Request Body" body=""
	I1003 18:13:06.685483   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:06.685824   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:07.185410   31648 type.go:168] "Request Body" body=""
	I1003 18:13:07.185478   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:07.185799   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:07.685361   31648 type.go:168] "Request Body" body=""
	I1003 18:13:07.685444   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:07.685776   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:07.685829   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:08.185354   31648 type.go:168] "Request Body" body=""
	I1003 18:13:08.185422   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:08.185738   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:08.685299   31648 type.go:168] "Request Body" body=""
	I1003 18:13:08.685380   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:08.685725   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:09.185279   31648 type.go:168] "Request Body" body=""
	I1003 18:13:09.185348   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:09.185678   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:09.685236   31648 type.go:168] "Request Body" body=""
	I1003 18:13:09.685312   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:09.685643   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:10.185169   31648 type.go:168] "Request Body" body=""
	I1003 18:13:10.185241   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:10.185552   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:10.185605   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:10.685136   31648 type.go:168] "Request Body" body=""
	I1003 18:13:10.685223   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:10.685575   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:11.185384   31648 type.go:168] "Request Body" body=""
	I1003 18:13:11.185459   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:11.185788   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:11.685352   31648 type.go:168] "Request Body" body=""
	I1003 18:13:11.685433   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:11.685753   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:12.185074   31648 type.go:168] "Request Body" body=""
	I1003 18:13:12.185141   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:12.185467   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:12.685018   31648 type.go:168] "Request Body" body=""
	I1003 18:13:12.685103   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:12.685412   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:12.685475   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:13.184997   31648 type.go:168] "Request Body" body=""
	I1003 18:13:13.185070   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:13.185403   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:13.684967   31648 type.go:168] "Request Body" body=""
	I1003 18:13:13.685061   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:13.685364   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:14.184923   31648 type.go:168] "Request Body" body=""
	I1003 18:13:14.185026   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:14.185364   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:14.685214   31648 type.go:168] "Request Body" body=""
	I1003 18:13:14.685280   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:14.685641   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:14.685714   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:15.185156   31648 type.go:168] "Request Body" body=""
	I1003 18:13:15.185255   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:15.185584   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:15.685142   31648 type.go:168] "Request Body" body=""
	I1003 18:13:15.685204   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:15.685537   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:16.185388   31648 type.go:168] "Request Body" body=""
	I1003 18:13:16.185470   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:16.185814   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:16.685411   31648 type.go:168] "Request Body" body=""
	I1003 18:13:16.685497   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:16.685863   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:16.685936   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:17.185442   31648 type.go:168] "Request Body" body=""
	I1003 18:13:17.185509   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:17.185829   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:17.685415   31648 type.go:168] "Request Body" body=""
	I1003 18:13:17.685525   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:17.685881   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:18.185495   31648 type.go:168] "Request Body" body=""
	I1003 18:13:18.185563   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:18.185876   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:18.685159   31648 type.go:168] "Request Body" body=""
	I1003 18:13:18.685230   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:18.685527   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:19.185084   31648 type.go:168] "Request Body" body=""
	I1003 18:13:19.185161   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:19.185450   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:19.185506   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:19.685103   31648 type.go:168] "Request Body" body=""
	I1003 18:13:19.685191   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:19.685616   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:20.185169   31648 type.go:168] "Request Body" body=""
	I1003 18:13:20.185250   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:20.185540   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:20.685137   31648 type.go:168] "Request Body" body=""
	I1003 18:13:20.685209   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:20.685542   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:21.185328   31648 type.go:168] "Request Body" body=""
	I1003 18:13:21.185409   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:21.185747   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:21.185800   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:21.685330   31648 type.go:168] "Request Body" body=""
	I1003 18:13:21.685393   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:21.685693   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:22.185267   31648 type.go:168] "Request Body" body=""
	I1003 18:13:22.185361   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:22.185713   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:22.685319   31648 type.go:168] "Request Body" body=""
	I1003 18:13:22.685385   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:22.685724   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:23.185388   31648 type.go:168] "Request Body" body=""
	I1003 18:13:23.185472   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:23.185812   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:23.185875   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:23.685447   31648 type.go:168] "Request Body" body=""
	I1003 18:13:23.685515   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:23.685833   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:24.185390   31648 type.go:168] "Request Body" body=""
	I1003 18:13:24.185457   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:24.185762   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:24.685669   31648 type.go:168] "Request Body" body=""
	I1003 18:13:24.685745   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:24.686090   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:25.185723   31648 type.go:168] "Request Body" body=""
	I1003 18:13:25.185792   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:25.186120   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:25.186180   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:25.685886   31648 type.go:168] "Request Body" body=""
	I1003 18:13:25.685961   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:25.686311   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:26.185007   31648 type.go:168] "Request Body" body=""
	I1003 18:13:26.185071   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:26.185380   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:26.684952   31648 type.go:168] "Request Body" body=""
	I1003 18:13:26.685041   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:26.685347   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:27.185970   31648 type.go:168] "Request Body" body=""
	I1003 18:13:27.186046   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:27.186356   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:27.186405   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:27.685041   31648 type.go:168] "Request Body" body=""
	I1003 18:13:27.685106   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:27.685416   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:28.185003   31648 type.go:168] "Request Body" body=""
	I1003 18:13:28.185070   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:28.185403   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:28.684968   31648 type.go:168] "Request Body" body=""
	I1003 18:13:28.685055   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:28.685378   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:29.184912   31648 type.go:168] "Request Body" body=""
	I1003 18:13:29.185004   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:29.185313   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:29.686012   31648 type.go:168] "Request Body" body=""
	I1003 18:13:29.686076   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:29.686383   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:29.686435   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:30.184929   31648 type.go:168] "Request Body" body=""
	I1003 18:13:30.185073   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:30.185387   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:30.684930   31648 type.go:168] "Request Body" body=""
	I1003 18:13:30.685049   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:30.685367   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:31.185212   31648 type.go:168] "Request Body" body=""
	I1003 18:13:31.185277   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:31.185571   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:31.685142   31648 type.go:168] "Request Body" body=""
	I1003 18:13:31.685208   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:31.685504   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:32.185085   31648 type.go:168] "Request Body" body=""
	I1003 18:13:32.185151   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:32.185469   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:32.185524   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:32.685051   31648 type.go:168] "Request Body" body=""
	I1003 18:13:32.685118   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:32.685424   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:33.185022   31648 type.go:168] "Request Body" body=""
	I1003 18:13:33.185092   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:33.185392   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:33.684962   31648 type.go:168] "Request Body" body=""
	I1003 18:13:33.685058   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:33.685365   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:34.184958   31648 type.go:168] "Request Body" body=""
	I1003 18:13:34.185041   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:34.185342   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:34.685149   31648 type.go:168] "Request Body" body=""
	I1003 18:13:34.685221   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:34.685506   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:34.685560   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:35.185096   31648 type.go:168] "Request Body" body=""
	I1003 18:13:35.185162   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:35.185507   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:35.685072   31648 type.go:168] "Request Body" body=""
	I1003 18:13:35.685138   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:35.685436   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:36.185249   31648 type.go:168] "Request Body" body=""
	I1003 18:13:36.185312   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:36.185619   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:36.685207   31648 type.go:168] "Request Body" body=""
	I1003 18:13:36.685270   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:36.685603   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:36.685664   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:37.185187   31648 type.go:168] "Request Body" body=""
	I1003 18:13:37.185258   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:37.185604   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:37.685170   31648 type.go:168] "Request Body" body=""
	I1003 18:13:37.685238   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:37.685540   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:38.185094   31648 type.go:168] "Request Body" body=""
	I1003 18:13:38.185165   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:38.185480   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:38.685085   31648 type.go:168] "Request Body" body=""
	I1003 18:13:38.685154   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:38.685491   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:39.185087   31648 type.go:168] "Request Body" body=""
	I1003 18:13:39.185161   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:39.185473   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:39.185530   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:39.685041   31648 type.go:168] "Request Body" body=""
	I1003 18:13:39.685104   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:39.685443   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:40.184993   31648 type.go:168] "Request Body" body=""
	I1003 18:13:40.185060   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:40.185369   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:40.684957   31648 type.go:168] "Request Body" body=""
	I1003 18:13:40.685046   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:40.685391   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:41.185256   31648 type.go:168] "Request Body" body=""
	I1003 18:13:41.185323   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:41.185632   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:41.185691   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:41.685166   31648 type.go:168] "Request Body" body=""
	I1003 18:13:41.685236   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:41.685524   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:42.185147   31648 type.go:168] "Request Body" body=""
	I1003 18:13:42.185215   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:42.185512   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:42.685072   31648 type.go:168] "Request Body" body=""
	I1003 18:13:42.685137   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:42.685438   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:43.185039   31648 type.go:168] "Request Body" body=""
	I1003 18:13:43.185104   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:43.185400   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:43.684960   31648 type.go:168] "Request Body" body=""
	I1003 18:13:43.685045   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:43.685352   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:43.685405   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:44.184941   31648 type.go:168] "Request Body" body=""
	I1003 18:13:44.185024   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:44.185317   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:44.685052   31648 type.go:168] "Request Body" body=""
	I1003 18:13:44.685120   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:44.685425   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:45.185055   31648 type.go:168] "Request Body" body=""
	I1003 18:13:45.185131   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:45.185445   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:45.685028   31648 type.go:168] "Request Body" body=""
	I1003 18:13:45.685092   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:45.685396   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:45.685450   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:46.185196   31648 type.go:168] "Request Body" body=""
	I1003 18:13:46.185259   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:46.185598   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:46.685146   31648 type.go:168] "Request Body" body=""
	I1003 18:13:46.685207   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:46.685520   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:47.185085   31648 type.go:168] "Request Body" body=""
	I1003 18:13:47.185146   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:47.185435   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:47.685023   31648 type.go:168] "Request Body" body=""
	I1003 18:13:47.685083   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:47.685387   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:48.184938   31648 type.go:168] "Request Body" body=""
	I1003 18:13:48.185024   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:48.185317   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:48.185366   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:48.685968   31648 type.go:168] "Request Body" body=""
	I1003 18:13:48.686071   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:48.686392   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:49.184927   31648 type.go:168] "Request Body" body=""
	I1003 18:13:49.185007   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:49.185301   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:49.685951   31648 type.go:168] "Request Body" body=""
	I1003 18:13:49.686058   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:49.686375   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:50.185987   31648 type.go:168] "Request Body" body=""
	I1003 18:13:50.186049   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:50.186339   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:50.186393   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:50.686008   31648 type.go:168] "Request Body" body=""
	I1003 18:13:50.686095   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:50.686413   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:51.185213   31648 type.go:168] "Request Body" body=""
	I1003 18:13:51.185281   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:51.185558   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:51.685097   31648 type.go:168] "Request Body" body=""
	I1003 18:13:51.685183   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:51.685518   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:52.185069   31648 type.go:168] "Request Body" body=""
	I1003 18:13:52.185132   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:52.185409   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:52.685038   31648 type.go:168] "Request Body" body=""
	I1003 18:13:52.685113   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:52.685416   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:52.685468   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:53.184948   31648 type.go:168] "Request Body" body=""
	I1003 18:13:53.185026   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:53.185309   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:53.685950   31648 type.go:168] "Request Body" body=""
	I1003 18:13:53.686043   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:53.686348   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:54.185948   31648 type.go:168] "Request Body" body=""
	I1003 18:13:54.186022   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:54.186302   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:54.685064   31648 type.go:168] "Request Body" body=""
	I1003 18:13:54.685138   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:54.685429   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:54.685486   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:55.185055   31648 type.go:168] "Request Body" body=""
	I1003 18:13:55.185122   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:55.185388   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:55.685066   31648 type.go:168] "Request Body" body=""
	I1003 18:13:55.685164   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:55.685462   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:56.185338   31648 type.go:168] "Request Body" body=""
	I1003 18:13:56.185406   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:56.185704   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:56.685239   31648 type.go:168] "Request Body" body=""
	I1003 18:13:56.685304   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:56.685629   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:56.685684   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:57.185240   31648 type.go:168] "Request Body" body=""
	I1003 18:13:57.185305   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:57.185635   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:57.685223   31648 type.go:168] "Request Body" body=""
	I1003 18:13:57.685287   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:57.685578   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:58.185123   31648 type.go:168] "Request Body" body=""
	I1003 18:13:58.185189   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:58.185504   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:58.685074   31648 type.go:168] "Request Body" body=""
	I1003 18:13:58.685137   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:58.685464   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:59.185038   31648 type.go:168] "Request Body" body=""
	I1003 18:13:59.185102   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:59.185391   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:59.185441   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:59.684997   31648 type.go:168] "Request Body" body=""
	I1003 18:13:59.685066   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:59.685383   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:00.184957   31648 type.go:168] "Request Body" body=""
	I1003 18:14:00.185041   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:00.185348   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:00.685990   31648 type.go:168] "Request Body" body=""
	I1003 18:14:00.686052   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:00.686352   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:01.185220   31648 type.go:168] "Request Body" body=""
	I1003 18:14:01.185292   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:01.185619   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:01.185673   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:01.685170   31648 type.go:168] "Request Body" body=""
	I1003 18:14:01.685244   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:01.685572   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:02.185133   31648 type.go:168] "Request Body" body=""
	I1003 18:14:02.185197   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:02.185506   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:02.685118   31648 type.go:168] "Request Body" body=""
	I1003 18:14:02.685184   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:02.685488   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:03.185090   31648 type.go:168] "Request Body" body=""
	I1003 18:14:03.185159   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:03.185488   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:03.685055   31648 type.go:168] "Request Body" body=""
	I1003 18:14:03.685119   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:03.685428   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:03.685480   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:04.185061   31648 type.go:168] "Request Body" body=""
	I1003 18:14:04.185131   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:04.185458   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:04.685298   31648 type.go:168] "Request Body" body=""
	I1003 18:14:04.685366   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:04.685670   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:05.185278   31648 type.go:168] "Request Body" body=""
	I1003 18:14:05.185348   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:05.185711   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:05.685243   31648 type.go:168] "Request Body" body=""
	I1003 18:14:05.685313   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:05.685621   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:05.685670   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:06.185390   31648 type.go:168] "Request Body" body=""
	I1003 18:14:06.185454   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:06.185796   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:06.685338   31648 type.go:168] "Request Body" body=""
	I1003 18:14:06.685404   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:06.685744   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:07.185312   31648 type.go:168] "Request Body" body=""
	I1003 18:14:07.185375   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:07.185694   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:07.685319   31648 type.go:168] "Request Body" body=""
	I1003 18:14:07.685388   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:07.685720   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:07.685775   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:08.185299   31648 type.go:168] "Request Body" body=""
	I1003 18:14:08.185362   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:08.185681   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:08.685362   31648 type.go:168] "Request Body" body=""
	I1003 18:14:08.685501   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:08.686040   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:09.185088   31648 type.go:168] "Request Body" body=""
	I1003 18:14:09.185166   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:09.185492   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:09.685168   31648 type.go:168] "Request Body" body=""
	I1003 18:14:09.685230   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:09.685527   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:10.185203   31648 type.go:168] "Request Body" body=""
	I1003 18:14:10.185266   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:10.185584   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:10.185635   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:10.685306   31648 type.go:168] "Request Body" body=""
	I1003 18:14:10.685367   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:10.685706   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:11.185477   31648 type.go:168] "Request Body" body=""
	I1003 18:14:11.185545   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:11.185858   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:11.685629   31648 type.go:168] "Request Body" body=""
	I1003 18:14:11.685690   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:11.686017   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:12.185788   31648 type.go:168] "Request Body" body=""
	I1003 18:14:12.185850   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:12.186194   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:12.186261   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:12.685007   31648 type.go:168] "Request Body" body=""
	I1003 18:14:12.685075   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:12.685367   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:13.185078   31648 type.go:168] "Request Body" body=""
	I1003 18:14:13.185142   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:13.185434   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:13.685146   31648 type.go:168] "Request Body" body=""
	I1003 18:14:13.685215   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:13.685514   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:14.185200   31648 type.go:168] "Request Body" body=""
	I1003 18:14:14.185264   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:14.185577   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:14.685359   31648 type.go:168] "Request Body" body=""
	W1003 18:14:14.685420   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded
	I1003 18:14:14.685433   31648 node_ready.go:38] duration metric: took 6m0.000605507s for node "functional-889240" to be "Ready" ...
	I1003 18:14:14.688030   31648 out.go:203] 
	W1003 18:14:14.689379   31648 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1003 18:14:14.689402   31648 out.go:285] * 
	W1003 18:14:14.691089   31648 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:14:14.693118   31648 out.go:203] 
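	
	The six-minute run of GET requests above is minikube polling the node object roughly every 500ms until the apiserver answers or the overall context deadline expires; the final "client rate limiter Wait returned an error: context deadline exceeded" is that deadline firing mid-wait. A minimal client-go sketch of the same readiness check, illustrative only (the kubeconfig path and node name come from the log; this is not minikube's actual code):
	
	// ready_probe.go - poll a node's Ready condition until a deadline expires.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		// Same shape as the loop above: retry every 500ms, give up after 6m.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		for {
			node, err := client.CoreV1().Nodes().Get(ctx, "functional-889240", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			select {
			case <-ctx.Done():
				fmt.Println("gave up:", ctx.Err()) // "context deadline exceeded"
				return
			case <-time.After(500 * time.Millisecond):
			}
		}
	}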
	
	
	==> CRI-O <==
	Oct 03 18:14:07 functional-889240 crio[2966]: time="2025-10-03T18:14:07.239192107Z" level=info msg="createCtr: removing container 072e4e9460dee9219f80ca505d4733bd0064816e717efde90762b7a102c27e9b" id=559bae75-fd42-4125-8d24-ff6dd69f00d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:07 functional-889240 crio[2966]: time="2025-10-03T18:14:07.23922293Z" level=info msg="createCtr: deleting container 072e4e9460dee9219f80ca505d4733bd0064816e717efde90762b7a102c27e9b from storage" id=559bae75-fd42-4125-8d24-ff6dd69f00d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:07 functional-889240 crio[2966]: time="2025-10-03T18:14:07.241163158Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-889240_kube-system_7e715cb6024854d45a9fa99576167e43_0" id=559bae75-fd42-4125-8d24-ff6dd69f00d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:09 functional-889240 crio[2966]: time="2025-10-03T18:14:09.212175329Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=19d07a45-fb10-41b2-9b94-8181c241e176 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:09 functional-889240 crio[2966]: time="2025-10-03T18:14:09.212940413Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=aa8cac25-319d-432c-a31a-d9b5de82fe6d name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:09 functional-889240 crio[2966]: time="2025-10-03T18:14:09.213820105Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-889240/kube-apiserver" id=18936f50-4957-42d2-bfc0-50b88dc6ed55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:09 functional-889240 crio[2966]: time="2025-10-03T18:14:09.21411552Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:14:09 functional-889240 crio[2966]: time="2025-10-03T18:14:09.218873126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:14:09 functional-889240 crio[2966]: time="2025-10-03T18:14:09.219323296Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:14:09 functional-889240 crio[2966]: time="2025-10-03T18:14:09.234948332Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=18936f50-4957-42d2-bfc0-50b88dc6ed55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:09 functional-889240 crio[2966]: time="2025-10-03T18:14:09.23631674Z" level=info msg="createCtr: deleting container ID ed3ac05f1b6173e8965eba234b45cb1f88789049f41edd8a04d789a4ba7851fa from idIndex" id=18936f50-4957-42d2-bfc0-50b88dc6ed55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:09 functional-889240 crio[2966]: time="2025-10-03T18:14:09.236349339Z" level=info msg="createCtr: removing container ed3ac05f1b6173e8965eba234b45cb1f88789049f41edd8a04d789a4ba7851fa" id=18936f50-4957-42d2-bfc0-50b88dc6ed55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:09 functional-889240 crio[2966]: time="2025-10-03T18:14:09.236374758Z" level=info msg="createCtr: deleting container ed3ac05f1b6173e8965eba234b45cb1f88789049f41edd8a04d789a4ba7851fa from storage" id=18936f50-4957-42d2-bfc0-50b88dc6ed55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:09 functional-889240 crio[2966]: time="2025-10-03T18:14:09.23828998Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-889240_kube-system_c6bcf20a60b81dff297fc63f5b978297_0" id=18936f50-4957-42d2-bfc0-50b88dc6ed55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:11 functional-889240 crio[2966]: time="2025-10-03T18:14:11.211944062Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=258d909b-abe8-4bab-9eb9-154ce3bd057f name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:11 functional-889240 crio[2966]: time="2025-10-03T18:14:11.212772586Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=5fae6f93-a02d-4605-b1e0-241bc6b01232 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:11 functional-889240 crio[2966]: time="2025-10-03T18:14:11.213529051Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-889240/kube-scheduler" id=d9636d42-c403-4dac-a41c-9ad49be471b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:11 functional-889240 crio[2966]: time="2025-10-03T18:14:11.213788313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:14:11 functional-889240 crio[2966]: time="2025-10-03T18:14:11.216948054Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:14:11 functional-889240 crio[2966]: time="2025-10-03T18:14:11.217376826Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:14:11 functional-889240 crio[2966]: time="2025-10-03T18:14:11.236758404Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d9636d42-c403-4dac-a41c-9ad49be471b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:11 functional-889240 crio[2966]: time="2025-10-03T18:14:11.238136749Z" level=info msg="createCtr: deleting container ID 37e0b0de5fa174fc2fb7baf919d8bbbe8227a3244b2c4eeb5ab2e0fb435d641d from idIndex" id=d9636d42-c403-4dac-a41c-9ad49be471b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:11 functional-889240 crio[2966]: time="2025-10-03T18:14:11.23816788Z" level=info msg="createCtr: removing container 37e0b0de5fa174fc2fb7baf919d8bbbe8227a3244b2c4eeb5ab2e0fb435d641d" id=d9636d42-c403-4dac-a41c-9ad49be471b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:11 functional-889240 crio[2966]: time="2025-10-03T18:14:11.238203696Z" level=info msg="createCtr: deleting container 37e0b0de5fa174fc2fb7baf919d8bbbe8227a3244b2c4eeb5ab2e0fb435d641d from storage" id=d9636d42-c403-4dac-a41c-9ad49be471b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:11 functional-889240 crio[2966]: time="2025-10-03T18:14:11.240064763Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-889240_kube-system_7dadd1df42d6a2c3d1907f134f7d5ea7_0" id=d9636d42-c403-4dac-a41c-9ad49be471b7 name=/runtime.v1.RuntimeService/CreateContainer
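	
	Every CreateContainer above dies on the same error, "cannot open sd-bus: No such file or directory": with a systemd cgroup manager, the OCI runtime asks systemd over D-Bus to create the container's scope, and the bus socket is missing inside the guest, so the apiserver, scheduler, and controller-manager containers are never created. A tiny probe of that same step, assuming the go-systemd library that the systemd cgroup driver relies on (illustrative, not CRI-O's code):
	
	// sdbus_probe.go - can a systemd D-Bus connection be opened at all?
	package main
	
	import (
		"context"
		"fmt"
	
		sd "github.com/coreos/go-systemd/v22/dbus"
	)
	
	func main() {
		conn, err := sd.NewWithContext(context.Background())
		if err != nil {
			// On this node the error wraps the missing bus socket.
			fmt.Println("sd-bus unavailable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("sd-bus reachable; the systemd cgroup driver should work")
	}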
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:14:16.318158    4393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:14:16.318652    4393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:14:16.320225    4393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:14:16.320648    4393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:14:16.322119    4393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
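	
	"connect: connection refused" here means the TCP handshake reaches the host but nothing is listening on 8441 (the apiserver container was never created), which is a different failure from a route or firewall timeout. A quick host-side check, sketched in Go (address and timeout are illustrative):
	
	// port_probe.go - distinguish "refused" from "unreachable" on the apiserver port.
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err) // "connection refused" on this node
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}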
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:14:16 up 56 min,  0 user,  load average: 0.14, 0.03, 0.04
	Linux functional-889240 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:14:07 functional-889240 kubelet[1817]:  > logger="UnhandledError"
	Oct 03 18:14:07 functional-889240 kubelet[1817]: E1003 18:14:07.241582    1817 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-889240" podUID="7e715cb6024854d45a9fa99576167e43"
	Oct 03 18:14:09 functional-889240 kubelet[1817]: E1003 18:14:09.211824    1817 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:14:09 functional-889240 kubelet[1817]: E1003 18:14:09.238505    1817 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:14:09 functional-889240 kubelet[1817]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:14:09 functional-889240 kubelet[1817]:  > podSandboxID="bb5ee21569299932af0968d7ca6c3e44bd5f6c5d7c8e5900d54800ccc90ccf96"
	Oct 03 18:14:09 functional-889240 kubelet[1817]: E1003 18:14:09.238632    1817 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:14:09 functional-889240 kubelet[1817]:         container kube-apiserver start failed in pod kube-apiserver-functional-889240_kube-system(c6bcf20a60b81dff297fc63f5b978297): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:14:09 functional-889240 kubelet[1817]:  > logger="UnhandledError"
	Oct 03 18:14:09 functional-889240 kubelet[1817]: E1003 18:14:09.238673    1817 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-889240" podUID="c6bcf20a60b81dff297fc63f5b978297"
	Oct 03 18:14:09 functional-889240 kubelet[1817]: E1003 18:14:09.250684    1817 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-889240\" not found"
	Oct 03 18:14:09 functional-889240 kubelet[1817]: E1003 18:14:09.387666    1817 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 03 18:14:09 functional-889240 kubelet[1817]: E1003 18:14:09.890345    1817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-889240?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 03 18:14:10 functional-889240 kubelet[1817]: I1003 18:14:10.086625    1817 kubelet_node_status.go:75] "Attempting to register node" node="functional-889240"
	Oct 03 18:14:10 functional-889240 kubelet[1817]: E1003 18:14:10.087008    1817 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-889240"
	Oct 03 18:14:11 functional-889240 kubelet[1817]: E1003 18:14:11.211567    1817 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:14:11 functional-889240 kubelet[1817]: E1003 18:14:11.240340    1817 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:14:11 functional-889240 kubelet[1817]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:14:11 functional-889240 kubelet[1817]:  > podSandboxID="9ea0d784c2fd12bcd1db05033ba2964baa15be14deeae00b6508f924c37e3473"
	Oct 03 18:14:11 functional-889240 kubelet[1817]: E1003 18:14:11.240438    1817 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:14:11 functional-889240 kubelet[1817]:         container kube-scheduler start failed in pod kube-scheduler-functional-889240_kube-system(7dadd1df42d6a2c3d1907f134f7d5ea7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:14:11 functional-889240 kubelet[1817]:  > logger="UnhandledError"
	Oct 03 18:14:11 functional-889240 kubelet[1817]: E1003 18:14:11.240489    1817 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-889240" podUID="7dadd1df42d6a2c3d1907f134f7d5ea7"
	Oct 03 18:14:11 functional-889240 kubelet[1817]: E1003 18:14:11.497556    1817 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-889240.186b0d404ae58a04\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-889240.186b0d404ae58a04  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-889240,UID:functional-889240,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-889240 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-889240,},FirstTimestamp:2025-10-03 18:04:09.203935748 +0000 UTC m=+0.376858749,LastTimestamp:2025-10-03 18:04:09.206706066 +0000 UTC m=+0.379629064,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-889240,}"
	Oct 03 18:14:16 functional-889240 kubelet[1817]: E1003 18:14:16.025257    1817 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240: exit status 2 (304.6684ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-889240" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (366.08s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (1.98s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-889240 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-889240 get po -A: exit status 1 (53.514218ms)

                                                
                                                
** stderr ** 
	E1003 18:14:17.194431   35294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:14:17.194815   35294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:14:17.196202   35294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:14:17.196488   35294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:14:17.197834   35294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-889240 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"E1003 18:14:17.194431   35294 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1003 18:14:17.194815   35294 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1003 18:14:17.196202   35294 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1003 18:14:17.196488   35294 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1003 18:14:17.197834   35294 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nThe connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-889240 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-889240 get po -A"
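
The test makes two independent assertions on the same run: stderr must be empty and stdout must mention kube-system. A stripped-down sketch of that pattern with os/exec (the context name comes from the log; the structure is illustrative, not the test's actual helper):

	// kubectl_check.go - run "kubectl get po -A" and apply both assertions.
	package main
	
	import (
		"bytes"
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		cmd := exec.Command("kubectl", "--context", "functional-889240", "get", "po", "-A")
		var stdout, stderr bytes.Buffer
		cmd.Stdout, cmd.Stderr = &stdout, &stderr
		err := cmd.Run()
	
		switch {
		case err != nil:
			fmt.Println("kubectl failed:", err, stderr.String())
		case stderr.Len() != 0:
			fmt.Println("expected stderr to be empty, got:", stderr.String())
		case !strings.Contains(stdout.String(), "kube-system"):
			fmt.Println("expected stdout to include kube-system")
		default:
			fmt.Println("ok")
		}
	}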
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-889240
helpers_test.go:243: (dbg) docker inspect functional-889240:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	        "Created": "2025-10-03T17:59:56.619817507Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 26766,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T17:59:56.652603806Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hosts",
	        "LogPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a-json.log",
	        "Name": "/functional-889240",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-889240:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-889240",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	                "LowerDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-889240",
	                "Source": "/var/lib/docker/volumes/functional-889240/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-889240",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-889240",
	                "name.minikube.sigs.k8s.io": "functional-889240",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da15d31dc23bdd4694ae9e3b61015d7ce0d61668c73d3e386422834c6f0321d8",
	            "SandboxKey": "/var/run/docker/netns/da15d31dc23b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-889240": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:9e:1d:e9:d9:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "03281bed183d0817c0bc237b5c25093fc10222138aedde4c7deef5823759fa24",
	                    "EndpointID": "28fa584fdd6e253816ae08a2460ef02b91085c8a7996d55008876e3bd65bbc7e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-889240",
	                        "9f4f0f10b4a9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
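
The inspect output shows the Docker container itself is healthy ("Status": "running", IP 192.168.49.2 on the functional-889240 network) even though the guest's apiserver is down, which is why the post-mortem checks both layers. Single fields are easier to pull with docker inspect's -f Go-template flag than from the full JSON (a sketch; the field paths mirror the output above):

	// inspect_state.go - read container state and node IP via -f templates.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func inspect(format string) string {
		out, err := exec.Command("docker", "inspect", "-f", format, "functional-889240").Output()
		if err != nil {
			return "error: " + err.Error()
		}
		return strings.TrimSpace(string(out))
	}
	
	func main() {
		fmt.Println("state:", inspect("{{.State.Status}}")) // "running"
		fmt.Println("ip:", inspect(`{{(index .NetworkSettings.Networks "functional-889240").IPAddress}}`))
	}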
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240: exit status 2 (286.969227ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
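
The --format={{.Host}} and --format={{.APIServer}} flags are Go templates rendered against minikube's status struct, which is how the harness can see "Running" for the container while the control plane reports "Stopped". A self-contained illustration with text/template (the struct is an illustrative subset, not minikube's exact type):

	// status_template.go - how a --format flag selects one status field.
	package main
	
	import (
		"os"
		"text/template"
	)
	
	// Status is an illustrative subset of the fields minikube exposes.
	type Status struct {
		Host, Kubelet, APIServer string
	}
	
	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		_ = tmpl.Execute(os.Stdout, st) // prints "Stopped"
	}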
helpers_test.go:252: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 logs -n 25
helpers_test.go:260: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-455553                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-455553   │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │ 03 Oct 25 17:42 UTC │
	│ start   │ --download-only -p download-docker-423289 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-423289 │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │                     │
	│ delete  │ -p download-docker-423289                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-423289 │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │ 03 Oct 25 17:42 UTC │
	│ start   │ --download-only -p binary-mirror-626924 --alsologtostderr --binary-mirror http://127.0.0.1:44037 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-626924   │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │                     │
	│ delete  │ -p binary-mirror-626924                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-626924   │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │ 03 Oct 25 17:42 UTC │
	│ addons  │ disable dashboard -p addons-051972                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-051972          │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │                     │
	│ addons  │ enable dashboard -p addons-051972                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-051972          │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │                     │
	│ start   │ -p addons-051972 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-051972          │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │                     │
	│ delete  │ -p addons-051972                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-051972          │ jenkins │ v1.37.0 │ 03 Oct 25 17:51 UTC │ 03 Oct 25 17:51 UTC │
	│ start   │ -p nospam-093146 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-093146 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:51 UTC │                     │
	│ start   │ nospam-093146 --log_dir /tmp/nospam-093146 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │                     │
	│ start   │ nospam-093146 --log_dir /tmp/nospam-093146 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │                     │
	│ start   │ nospam-093146 --log_dir /tmp/nospam-093146 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │                     │
	│ pause   │ nospam-093146 --log_dir /tmp/nospam-093146 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ pause   │ nospam-093146 --log_dir /tmp/nospam-093146 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ pause   │ nospam-093146 --log_dir /tmp/nospam-093146 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ unpause │ nospam-093146 --log_dir /tmp/nospam-093146 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ unpause │ nospam-093146 --log_dir /tmp/nospam-093146 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ unpause │ nospam-093146 --log_dir /tmp/nospam-093146 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ stop    │ nospam-093146 --log_dir /tmp/nospam-093146 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ stop    │ nospam-093146 --log_dir /tmp/nospam-093146 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ stop    │ nospam-093146 --log_dir /tmp/nospam-093146 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ delete  │ -p nospam-093146                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-093146          │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ start   │ -p functional-889240 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-889240      │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │                     │
	│ start   │ -p functional-889240 --alsologtostderr -v=8                                                                                                                                                                                                                                                                                                                                                                                                                              │ functional-889240      │ jenkins │ v1.37.0 │ 03 Oct 25 18:08 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:08:11
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:08:11.068231   31648 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:08:11.068486   31648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:08:11.068496   31648 out.go:374] Setting ErrFile to fd 2...
	I1003 18:08:11.068502   31648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:08:11.068729   31648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:08:11.069215   31648 out.go:368] Setting JSON to false
	I1003 18:08:11.070085   31648 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3042,"bootTime":1759511849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:08:11.070168   31648 start.go:140] virtualization: kvm guest
	I1003 18:08:11.073397   31648 out.go:179] * [functional-889240] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:08:11.074567   31648 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:08:11.074571   31648 notify.go:220] Checking for updates...
	I1003 18:08:11.077123   31648 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:08:11.078380   31648 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:11.079542   31648 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:08:11.080665   31648 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:08:11.081754   31648 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:08:11.083246   31648 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:08:11.083337   31648 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:08:11.109195   31648 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:08:11.109276   31648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:08:11.161161   31648 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:08:11.151693527 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:08:11.161260   31648 docker.go:318] overlay module found
	I1003 18:08:11.162933   31648 out.go:179] * Using the docker driver based on existing profile
	I1003 18:08:11.164103   31648 start.go:304] selected driver: docker
	I1003 18:08:11.164115   31648 start.go:924] validating driver "docker" against &{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:08:11.164183   31648 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:08:11.164266   31648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:08:11.217384   31648 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:08:11.207171248 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:08:11.218094   31648 cni.go:84] Creating CNI manager for ""
	I1003 18:08:11.218156   31648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:08:11.218200   31648 start.go:348] cluster config:
	{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:08:11.220110   31648 out.go:179] * Starting "functional-889240" primary control-plane node in "functional-889240" cluster
	I1003 18:08:11.221257   31648 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:08:11.222336   31648 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:08:11.223595   31648 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:08:11.223644   31648 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:08:11.223654   31648 cache.go:58] Caching tarball of preloaded images
	I1003 18:08:11.223686   31648 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:08:11.223758   31648 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:08:11.223772   31648 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:08:11.223859   31648 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/config.json ...
	I1003 18:08:11.242913   31648 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:08:11.242930   31648 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:08:11.242946   31648 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:08:11.242988   31648 start.go:360] acquireMachinesLock for functional-889240: {Name:mk6750a9fb1c1c3747b0abf2aebe2a2d0047ae3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:08:11.243063   31648 start.go:364] duration metric: took 50.516µs to acquireMachinesLock for "functional-889240"
	I1003 18:08:11.243090   31648 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:08:11.243097   31648 fix.go:54] fixHost starting: 
	I1003 18:08:11.243298   31648 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:08:11.259925   31648 fix.go:112] recreateIfNeeded on functional-889240: state=Running err=<nil>
	W1003 18:08:11.259951   31648 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 18:08:11.261699   31648 out.go:252] * Updating the running docker "functional-889240" container ...
	I1003 18:08:11.261731   31648 machine.go:93] provisionDockerMachine start ...
	I1003 18:08:11.261806   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:11.278828   31648 main.go:141] libmachine: Using SSH client type: native
	I1003 18:08:11.279109   31648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:08:11.279121   31648 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:08:11.421621   31648 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-889240
	
	I1003 18:08:11.421642   31648 ubuntu.go:182] provisioning hostname "functional-889240"
	I1003 18:08:11.421693   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:11.439154   31648 main.go:141] libmachine: Using SSH client type: native
	I1003 18:08:11.439372   31648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:08:11.439384   31648 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-889240 && echo "functional-889240" | sudo tee /etc/hostname
	I1003 18:08:11.590164   31648 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-889240
	
	I1003 18:08:11.590238   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:11.607612   31648 main.go:141] libmachine: Using SSH client type: native
	I1003 18:08:11.607822   31648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:08:11.607839   31648 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-889240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-889240/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-889240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:08:11.750385   31648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:08:11.750412   31648 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:08:11.750443   31648 ubuntu.go:190] setting up certificates
	I1003 18:08:11.750454   31648 provision.go:84] configureAuth start
	I1003 18:08:11.750512   31648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-889240
	I1003 18:08:11.767416   31648 provision.go:143] copyHostCerts
	I1003 18:08:11.767453   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:08:11.767484   31648 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:08:11.767498   31648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:08:11.767564   31648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:08:11.767659   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:08:11.767679   31648 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:08:11.767686   31648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:08:11.767714   31648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:08:11.767934   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:08:11.768183   31648 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:08:11.768200   31648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:08:11.768251   31648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:08:11.768350   31648 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.functional-889240 san=[127.0.0.1 192.168.49.2 functional-889240 localhost minikube]
	I1003 18:08:11.920440   31648 provision.go:177] copyRemoteCerts
	I1003 18:08:11.920514   31648 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:08:11.920551   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:11.938061   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.037875   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:08:12.037937   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1003 18:08:12.054720   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:08:12.054773   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 18:08:12.071055   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:08:12.071110   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:08:12.087547   31648 provision.go:87] duration metric: took 337.079976ms to configureAuth
	I1003 18:08:12.087574   31648 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:08:12.087766   31648 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:08:12.087867   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.105048   31648 main.go:141] libmachine: Using SSH client type: native
	I1003 18:08:12.105289   31648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:08:12.105305   31648 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:08:12.366340   31648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:08:12.366367   31648 machine.go:96] duration metric: took 1.104629442s to provisionDockerMachine
	I1003 18:08:12.366377   31648 start.go:293] postStartSetup for "functional-889240" (driver="docker")
	I1003 18:08:12.366388   31648 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:08:12.366431   31648 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:08:12.366476   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.383468   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.483988   31648 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:08:12.487264   31648 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1003 18:08:12.487282   31648 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1003 18:08:12.487289   31648 command_runner.go:130] > VERSION_ID="12"
	I1003 18:08:12.487295   31648 command_runner.go:130] > VERSION="12 (bookworm)"
	I1003 18:08:12.487301   31648 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1003 18:08:12.487306   31648 command_runner.go:130] > ID=debian
	I1003 18:08:12.487313   31648 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1003 18:08:12.487320   31648 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1003 18:08:12.487329   31648 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1003 18:08:12.487402   31648 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:08:12.487425   31648 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:08:12.487438   31648 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:08:12.487491   31648 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:08:12.487581   31648 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:08:12.487593   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:08:12.487688   31648 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts -> hosts in /etc/test/nested/copy/12212
	I1003 18:08:12.487697   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts -> /etc/test/nested/copy/12212/hosts
	I1003 18:08:12.487740   31648 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/12212
	I1003 18:08:12.495127   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:08:12.511597   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts --> /etc/test/nested/copy/12212/hosts (40 bytes)
	I1003 18:08:12.528571   31648 start.go:296] duration metric: took 162.180752ms for postStartSetup
	I1003 18:08:12.528647   31648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:08:12.528710   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.546258   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.643641   31648 command_runner.go:130] > 39%
	I1003 18:08:12.643858   31648 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:08:12.648017   31648 command_runner.go:130] > 179G
	I1003 18:08:12.648284   31648 fix.go:56] duration metric: took 1.405183874s for fixHost
	I1003 18:08:12.648303   31648 start.go:83] releasing machines lock for "functional-889240", held for 1.405223544s
	I1003 18:08:12.648364   31648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-889240
	I1003 18:08:12.665548   31648 ssh_runner.go:195] Run: cat /version.json
	I1003 18:08:12.665589   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.665627   31648 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:08:12.665684   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.683771   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.684037   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.833728   31648 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1003 18:08:12.833784   31648 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1003 18:08:12.833903   31648 ssh_runner.go:195] Run: systemctl --version
	I1003 18:08:12.840008   31648 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1003 18:08:12.840056   31648 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1003 18:08:12.840282   31648 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:08:12.874135   31648 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1003 18:08:12.878285   31648 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1003 18:08:12.878575   31648 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:08:12.878637   31648 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:08:12.886227   31648 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 18:08:12.886250   31648 start.go:495] detecting cgroup driver to use...
	I1003 18:08:12.886282   31648 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:08:12.886327   31648 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:08:12.900106   31648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:08:12.911429   31648 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:08:12.911477   31648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:08:12.925289   31648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:08:12.936739   31648 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:08:13.020667   31648 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:08:13.102263   31648 docker.go:234] disabling docker service ...
	I1003 18:08:13.102328   31648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:08:13.115759   31648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:08:13.127581   31648 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:08:13.208801   31648 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:08:13.298232   31648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:08:13.314511   31648 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:08:13.327949   31648 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1003 18:08:13.328859   31648 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:08:13.328914   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.337658   31648 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:08:13.337709   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.346162   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.354712   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.363098   31648 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:08:13.370793   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.378940   31648 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.386700   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.394938   31648 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:08:13.401467   31648 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1003 18:08:13.402164   31648 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:08:13.409040   31648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:08:13.496423   31648 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 18:08:13.599891   31648 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:08:13.599956   31648 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:08:13.603739   31648 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1003 18:08:13.603760   31648 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1003 18:08:13.603769   31648 command_runner.go:130] > Device: 0,59	Inode: 3868        Links: 1
	I1003 18:08:13.603779   31648 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1003 18:08:13.603787   31648 command_runner.go:130] > Access: 2025-10-03 18:08:13.582699245 +0000
	I1003 18:08:13.603796   31648 command_runner.go:130] > Modify: 2025-10-03 18:08:13.582699245 +0000
	I1003 18:08:13.603806   31648 command_runner.go:130] > Change: 2025-10-03 18:08:13.582699245 +0000
	I1003 18:08:13.603811   31648 command_runner.go:130] >  Birth: 2025-10-03 18:08:13.582699245 +0000
	I1003 18:08:13.603837   31648 start.go:563] Will wait 60s for crictl version
	I1003 18:08:13.603884   31648 ssh_runner.go:195] Run: which crictl
	I1003 18:08:13.607403   31648 command_runner.go:130] > /usr/local/bin/crictl
	I1003 18:08:13.607458   31648 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:08:13.630641   31648 command_runner.go:130] > Version:  0.1.0
	I1003 18:08:13.630667   31648 command_runner.go:130] > RuntimeName:  cri-o
	I1003 18:08:13.630673   31648 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1003 18:08:13.630680   31648 command_runner.go:130] > RuntimeApiVersion:  v1
	I1003 18:08:13.630699   31648 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:08:13.630764   31648 ssh_runner.go:195] Run: crio --version
	I1003 18:08:13.656303   31648 command_runner.go:130] > crio version 1.34.1
	I1003 18:08:13.656324   31648 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1003 18:08:13.656329   31648 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1003 18:08:13.656339   31648 command_runner.go:130] >    GitTreeState:   dirty
	I1003 18:08:13.656344   31648 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1003 18:08:13.656348   31648 command_runner.go:130] >    GoVersion:      go1.24.6
	I1003 18:08:13.656352   31648 command_runner.go:130] >    Compiler:       gc
	I1003 18:08:13.656365   31648 command_runner.go:130] >    Platform:       linux/amd64
	I1003 18:08:13.656372   31648 command_runner.go:130] >    Linkmode:       static
	I1003 18:08:13.656378   31648 command_runner.go:130] >    BuildTags:
	I1003 18:08:13.656383   31648 command_runner.go:130] >      static
	I1003 18:08:13.656387   31648 command_runner.go:130] >      netgo
	I1003 18:08:13.656393   31648 command_runner.go:130] >      osusergo
	I1003 18:08:13.656396   31648 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1003 18:08:13.656402   31648 command_runner.go:130] >      seccomp
	I1003 18:08:13.656405   31648 command_runner.go:130] >      apparmor
	I1003 18:08:13.656410   31648 command_runner.go:130] >      selinux
	I1003 18:08:13.656415   31648 command_runner.go:130] >    LDFlags:          unknown
	I1003 18:08:13.656421   31648 command_runner.go:130] >    SeccompEnabled:   true
	I1003 18:08:13.656426   31648 command_runner.go:130] >    AppArmorEnabled:  false
	I1003 18:08:13.657588   31648 ssh_runner.go:195] Run: crio --version
	I1003 18:08:13.682656   31648 command_runner.go:130] > crio version 1.34.1
	I1003 18:08:13.682693   31648 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1003 18:08:13.682698   31648 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1003 18:08:13.682703   31648 command_runner.go:130] >    GitTreeState:   dirty
	I1003 18:08:13.682708   31648 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1003 18:08:13.682712   31648 command_runner.go:130] >    GoVersion:      go1.24.6
	I1003 18:08:13.682716   31648 command_runner.go:130] >    Compiler:       gc
	I1003 18:08:13.682720   31648 command_runner.go:130] >    Platform:       linux/amd64
	I1003 18:08:13.682724   31648 command_runner.go:130] >    Linkmode:       static
	I1003 18:08:13.682728   31648 command_runner.go:130] >    BuildTags:
	I1003 18:08:13.682733   31648 command_runner.go:130] >      static
	I1003 18:08:13.682737   31648 command_runner.go:130] >      netgo
	I1003 18:08:13.682741   31648 command_runner.go:130] >      osusergo
	I1003 18:08:13.682746   31648 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1003 18:08:13.682753   31648 command_runner.go:130] >      seccomp
	I1003 18:08:13.682756   31648 command_runner.go:130] >      apparmor
	I1003 18:08:13.682759   31648 command_runner.go:130] >      selinux
	I1003 18:08:13.682763   31648 command_runner.go:130] >    LDFlags:          unknown
	I1003 18:08:13.682770   31648 command_runner.go:130] >    SeccompEnabled:   true
	I1003 18:08:13.682774   31648 command_runner.go:130] >    AppArmorEnabled:  false
	I1003 18:08:13.685817   31648 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:08:13.686852   31648 cli_runner.go:164] Run: docker network inspect functional-889240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:08:13.703291   31648 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:08:13.707207   31648 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1003 18:08:13.707295   31648 kubeadm.go:883] updating cluster {Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:08:13.707417   31648 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:08:13.707473   31648 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:08:13.737725   31648 command_runner.go:130] > {
	I1003 18:08:13.737745   31648 command_runner.go:130] >   "images":  [
	I1003 18:08:13.737749   31648 command_runner.go:130] >     {
	I1003 18:08:13.737755   31648 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1003 18:08:13.737763   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.737773   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1003 18:08:13.737780   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737786   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.737798   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1003 18:08:13.737807   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1003 18:08:13.737811   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737815   31648 command_runner.go:130] >       "size":  "109379124",
	I1003 18:08:13.737819   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.737828   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.737832   31648 command_runner.go:130] >     },
	I1003 18:08:13.737835   31648 command_runner.go:130] >     {
	I1003 18:08:13.737841   31648 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1003 18:08:13.737848   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.737859   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1003 18:08:13.737868   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737875   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.737886   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1003 18:08:13.737898   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1003 18:08:13.737904   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737908   31648 command_runner.go:130] >       "size":  "31470524",
	I1003 18:08:13.737914   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.737920   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.737931   31648 command_runner.go:130] >     },
	I1003 18:08:13.737939   31648 command_runner.go:130] >     {
	I1003 18:08:13.737948   31648 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1003 18:08:13.737958   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.737969   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1003 18:08:13.737987   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737995   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738007   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1003 18:08:13.738023   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1003 18:08:13.738031   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738037   31648 command_runner.go:130] >       "size":  "76103547",
	I1003 18:08:13.738045   31648 command_runner.go:130] >       "username":  "nonroot",
	I1003 18:08:13.738049   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738054   31648 command_runner.go:130] >     },
	I1003 18:08:13.738058   31648 command_runner.go:130] >     {
	I1003 18:08:13.738070   31648 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1003 18:08:13.738081   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738091   31648 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1003 18:08:13.738100   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738110   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738124   31648 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1003 18:08:13.738137   31648 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1003 18:08:13.738143   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738148   31648 command_runner.go:130] >       "size":  "195976448",
	I1003 18:08:13.738155   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738165   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.738175   31648 command_runner.go:130] >       },
	I1003 18:08:13.738187   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738197   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738205   31648 command_runner.go:130] >     },
	I1003 18:08:13.738212   31648 command_runner.go:130] >     {
	I1003 18:08:13.738223   31648 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1003 18:08:13.738230   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738236   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1003 18:08:13.738245   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738256   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738270   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1003 18:08:13.738285   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1003 18:08:13.738293   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738301   31648 command_runner.go:130] >       "size":  "89046001",
	I1003 18:08:13.738308   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738312   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.738315   31648 command_runner.go:130] >       },
	I1003 18:08:13.738320   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738329   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738338   31648 command_runner.go:130] >     },
	I1003 18:08:13.738344   31648 command_runner.go:130] >     {
	I1003 18:08:13.738357   31648 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1003 18:08:13.738366   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738377   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1003 18:08:13.738386   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738395   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738402   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1003 18:08:13.738418   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1003 18:08:13.738427   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738434   31648 command_runner.go:130] >       "size":  "76004181",
	I1003 18:08:13.738443   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738453   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.738460   31648 command_runner.go:130] >       },
	I1003 18:08:13.738467   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738475   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738480   31648 command_runner.go:130] >     },
	I1003 18:08:13.738484   31648 command_runner.go:130] >     {
	I1003 18:08:13.738493   31648 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1003 18:08:13.738502   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738514   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1003 18:08:13.738522   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738531   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738545   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1003 18:08:13.738560   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1003 18:08:13.738568   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738572   31648 command_runner.go:130] >       "size":  "73138073",
	I1003 18:08:13.738580   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738586   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738595   31648 command_runner.go:130] >     },
	I1003 18:08:13.738605   31648 command_runner.go:130] >     {
	I1003 18:08:13.738617   31648 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1003 18:08:13.738625   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738634   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1003 18:08:13.738642   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738648   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738658   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1003 18:08:13.738674   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1003 18:08:13.738683   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738693   31648 command_runner.go:130] >       "size":  "53844823",
	I1003 18:08:13.738702   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738710   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.738718   31648 command_runner.go:130] >       },
	I1003 18:08:13.738724   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738733   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738743   31648 command_runner.go:130] >     },
	I1003 18:08:13.738747   31648 command_runner.go:130] >     {
	I1003 18:08:13.738756   31648 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1003 18:08:13.738766   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738777   31648 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1003 18:08:13.738785   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738792   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738806   31648 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1003 18:08:13.738819   31648 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1003 18:08:13.738827   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738832   31648 command_runner.go:130] >       "size":  "742092",
	I1003 18:08:13.738838   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738843   31648 command_runner.go:130] >         "value":  "65535"
	I1003 18:08:13.738851   31648 command_runner.go:130] >       },
	I1003 18:08:13.738862   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738871   31648 command_runner.go:130] >       "pinned":  true
	I1003 18:08:13.738885   31648 command_runner.go:130] >     }
	I1003 18:08:13.738890   31648 command_runner.go:130] >   ]
	I1003 18:08:13.738898   31648 command_runner.go:130] > }
	I1003 18:08:13.739109   31648 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:08:13.739126   31648 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:08:13.739173   31648 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:08:13.761526   31648 command_runner.go:130] > {
	I1003 18:08:13.761550   31648 command_runner.go:130] >   "images":  [
	I1003 18:08:13.761558   31648 command_runner.go:130] >     {
	I1003 18:08:13.761569   31648 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1003 18:08:13.761577   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.761586   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1003 18:08:13.761592   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761599   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.761616   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1003 18:08:13.761631   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1003 18:08:13.761639   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761646   31648 command_runner.go:130] >       "size":  "109379124",
	I1003 18:08:13.761659   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.761672   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.761681   31648 command_runner.go:130] >     },
	I1003 18:08:13.761686   31648 command_runner.go:130] >     {
	I1003 18:08:13.761698   31648 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1003 18:08:13.761708   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.761719   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1003 18:08:13.761728   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761737   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.761753   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1003 18:08:13.761770   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1003 18:08:13.761779   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761789   31648 command_runner.go:130] >       "size":  "31470524",
	I1003 18:08:13.761799   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.761810   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.761818   31648 command_runner.go:130] >     },
	I1003 18:08:13.761823   31648 command_runner.go:130] >     {
	I1003 18:08:13.761836   31648 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1003 18:08:13.761845   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.761852   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1003 18:08:13.761860   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761866   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.761879   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1003 18:08:13.761889   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1003 18:08:13.761897   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761903   31648 command_runner.go:130] >       "size":  "76103547",
	I1003 18:08:13.761913   31648 command_runner.go:130] >       "username":  "nonroot",
	I1003 18:08:13.761922   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.761934   31648 command_runner.go:130] >     },
	I1003 18:08:13.761942   31648 command_runner.go:130] >     {
	I1003 18:08:13.761952   31648 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1003 18:08:13.761960   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.761970   31648 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1003 18:08:13.762000   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762008   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762019   31648 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1003 18:08:13.762032   31648 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1003 18:08:13.762041   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762051   31648 command_runner.go:130] >       "size":  "195976448",
	I1003 18:08:13.762060   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762068   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.762074   31648 command_runner.go:130] >       },
	I1003 18:08:13.762087   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762097   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762101   31648 command_runner.go:130] >     },
	I1003 18:08:13.762109   31648 command_runner.go:130] >     {
	I1003 18:08:13.762117   31648 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1003 18:08:13.762126   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762135   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1003 18:08:13.762143   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762149   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762163   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1003 18:08:13.762178   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1003 18:08:13.762186   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762193   31648 command_runner.go:130] >       "size":  "89046001",
	I1003 18:08:13.762202   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762212   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.762221   31648 command_runner.go:130] >       },
	I1003 18:08:13.762229   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762239   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762248   31648 command_runner.go:130] >     },
	I1003 18:08:13.762256   31648 command_runner.go:130] >     {
	I1003 18:08:13.762265   31648 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1003 18:08:13.762275   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762284   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1003 18:08:13.762292   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762303   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762319   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1003 18:08:13.762335   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1003 18:08:13.762343   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762353   31648 command_runner.go:130] >       "size":  "76004181",
	I1003 18:08:13.762361   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762367   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.762374   31648 command_runner.go:130] >       },
	I1003 18:08:13.762380   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762388   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762392   31648 command_runner.go:130] >     },
	I1003 18:08:13.762401   31648 command_runner.go:130] >     {
	I1003 18:08:13.762412   31648 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1003 18:08:13.762422   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762431   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1003 18:08:13.762438   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762444   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762456   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1003 18:08:13.762468   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1003 18:08:13.762477   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762487   31648 command_runner.go:130] >       "size":  "73138073",
	I1003 18:08:13.762497   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762506   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762515   31648 command_runner.go:130] >     },
	I1003 18:08:13.762523   31648 command_runner.go:130] >     {
	I1003 18:08:13.762533   31648 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1003 18:08:13.762539   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762547   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1003 18:08:13.762552   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762559   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762570   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1003 18:08:13.762593   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1003 18:08:13.762602   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762608   31648 command_runner.go:130] >       "size":  "53844823",
	I1003 18:08:13.762616   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762623   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.762630   31648 command_runner.go:130] >       },
	I1003 18:08:13.762636   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762645   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762653   31648 command_runner.go:130] >     },
	I1003 18:08:13.762657   31648 command_runner.go:130] >     {
	I1003 18:08:13.762665   31648 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1003 18:08:13.762671   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762681   31648 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1003 18:08:13.762686   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762695   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762706   31648 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1003 18:08:13.762720   31648 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1003 18:08:13.762728   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762732   31648 command_runner.go:130] >       "size":  "742092",
	I1003 18:08:13.762737   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762742   31648 command_runner.go:130] >         "value":  "65535"
	I1003 18:08:13.762747   31648 command_runner.go:130] >       },
	I1003 18:08:13.762751   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762757   31648 command_runner.go:130] >       "pinned":  true
	I1003 18:08:13.762761   31648 command_runner.go:130] >     }
	I1003 18:08:13.762766   31648 command_runner.go:130] >   ]
	I1003 18:08:13.762769   31648 command_runner.go:130] > }
	I1003 18:08:13.763568   31648 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:08:13.763587   31648 cache_images.go:85] Images are preloaded, skipping loading
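For reference, the `crictl images --output json` payload dumped twice above can be decoded with a couple of Go structs; a minimal sketch, assuming only the fields visible in this log (the full CRI schema carries more, e.g. the optional "uid" object):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // imageList mirrors just the fields present in the log output above;
    // note that crictl emits "size" as a string, not a number.
    type imageList struct {
    	Images []struct {
    		ID          string   `json:"id"`
    		RepoTags    []string `json:"repoTags"`
    		RepoDigests []string `json:"repoDigests"`
    		Size        string   `json:"size"`
    		Username    string   `json:"username"`
    		Pinned      bool     `json:"pinned"`
    	} `json:"images"`
    }

    func main() {
    	// Truncated sample shaped like the log output; real input would come
    	// from running `sudo crictl images --output json` on the node.
    	raw := []byte(`{"images": [{"id": "cd073f4c5f6a", "repoTags": ["registry.k8s.io/pause:3.10.1"], "size": "742092", "pinned": true}]}`)

    	var list imageList
    	if err := json.Unmarshal(raw, &list); err != nil {
    		panic(err)
    	}
    	for _, img := range list.Images {
    		fmt.Printf("%v pinned=%v size=%s\n", img.RepoTags, img.Pinned, img.Size)
    	}
    }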
	I1003 18:08:13.763596   31648 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1003 18:08:13.763703   31648 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-889240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
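The paired ExecStart= lines in the unit above follow the standard systemd drop-in convention: the empty ExecStart= clears the command inherited from the base unit, and the line after it installs the replacement. A sketch of rendering such a drop-in, with the binary path and flag values taken from the log (the helper itself is illustrative, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // kubeletDropIn resets ExecStart and redefines it with the given flags,
    // mirroring the [Service] section logged above.
    func kubeletDropIn(binary string, flags []string) string {
    	return "[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\nExecStart=" +
    		binary + " " + strings.Join(flags, " ") + "\n\n[Install]\n"
    }

    func main() {
    	fmt.Print(kubeletDropIn("/var/lib/minikube/binaries/v1.34.1/kubelet", []string{
    		"--hostname-override=functional-889240",
    		"--node-ip=192.168.49.2",
    	}))
    }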
	I1003 18:08:13.763779   31648 ssh_runner.go:195] Run: crio config
	I1003 18:08:13.802487   31648 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1003 18:08:13.802512   31648 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1003 18:08:13.802523   31648 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1003 18:08:13.802528   31648 command_runner.go:130] > #
	I1003 18:08:13.802538   31648 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1003 18:08:13.802546   31648 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1003 18:08:13.802555   31648 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1003 18:08:13.802566   31648 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1003 18:08:13.802572   31648 command_runner.go:130] > # reload'.
	I1003 18:08:13.802583   31648 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1003 18:08:13.802595   31648 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1003 18:08:13.802606   31648 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1003 18:08:13.802615   31648 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1003 18:08:13.802622   31648 command_runner.go:130] > [crio]
	I1003 18:08:13.802632   31648 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1003 18:08:13.802640   31648 command_runner.go:130] > # containers images, in this directory.
	I1003 18:08:13.802653   31648 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1003 18:08:13.802671   31648 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1003 18:08:13.802680   31648 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1003 18:08:13.802693   31648 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1003 18:08:13.802704   31648 command_runner.go:130] > # imagestore = ""
	I1003 18:08:13.802714   31648 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1003 18:08:13.802726   31648 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1003 18:08:13.802736   31648 command_runner.go:130] > # storage_driver = "overlay"
	I1003 18:08:13.802747   31648 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1003 18:08:13.802761   31648 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1003 18:08:13.802770   31648 command_runner.go:130] > # storage_option = [
	I1003 18:08:13.802777   31648 command_runner.go:130] > # ]
	I1003 18:08:13.802788   31648 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1003 18:08:13.802800   31648 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1003 18:08:13.802808   31648 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1003 18:08:13.802820   31648 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1003 18:08:13.802830   31648 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1003 18:08:13.802835   31648 command_runner.go:130] > # always happen on a node reboot
	I1003 18:08:13.802840   31648 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1003 18:08:13.802849   31648 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1003 18:08:13.802860   31648 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1003 18:08:13.802865   31648 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1003 18:08:13.802871   31648 command_runner.go:130] > # version_file_persist = ""
	I1003 18:08:13.802882   31648 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1003 18:08:13.802899   31648 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1003 18:08:13.802906   31648 command_runner.go:130] > # internal_wipe = true
	I1003 18:08:13.802917   31648 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1003 18:08:13.802929   31648 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1003 18:08:13.802935   31648 command_runner.go:130] > # internal_repair = true
	I1003 18:08:13.802943   31648 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1003 18:08:13.802953   31648 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1003 18:08:13.802966   31648 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1003 18:08:13.802985   31648 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1003 18:08:13.802996   31648 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1003 18:08:13.803006   31648 command_runner.go:130] > [crio.api]
	I1003 18:08:13.803015   31648 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1003 18:08:13.803025   31648 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1003 18:08:13.803033   31648 command_runner.go:130] > # IP address on which the stream server will listen.
	I1003 18:08:13.803043   31648 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1003 18:08:13.803054   31648 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1003 18:08:13.803065   31648 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1003 18:08:13.803072   31648 command_runner.go:130] > # stream_port = "0"
	I1003 18:08:13.803083   31648 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1003 18:08:13.803090   31648 command_runner.go:130] > # stream_enable_tls = false
	I1003 18:08:13.803102   31648 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1003 18:08:13.803114   31648 command_runner.go:130] > # stream_idle_timeout = ""
	I1003 18:08:13.803124   31648 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1003 18:08:13.803136   31648 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1003 18:08:13.803146   31648 command_runner.go:130] > # stream_tls_cert = ""
	I1003 18:08:13.803156   31648 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1003 18:08:13.803166   31648 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1003 18:08:13.803175   31648 command_runner.go:130] > # stream_tls_key = ""
	I1003 18:08:13.803185   31648 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1003 18:08:13.803197   31648 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1003 18:08:13.803202   31648 command_runner.go:130] > # automatically pick up the changes.
	I1003 18:08:13.803207   31648 command_runner.go:130] > # stream_tls_ca = ""
	I1003 18:08:13.803271   31648 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1003 18:08:13.803286   31648 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1003 18:08:13.803296   31648 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1003 18:08:13.803308   31648 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1003 18:08:13.803318   31648 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1003 18:08:13.803331   31648 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1003 18:08:13.803338   31648 command_runner.go:130] > [crio.runtime]
	I1003 18:08:13.803350   31648 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1003 18:08:13.803358   31648 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1003 18:08:13.803367   31648 command_runner.go:130] > # "nofile=1024:2048"
	I1003 18:08:13.803378   31648 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1003 18:08:13.803388   31648 command_runner.go:130] > # default_ulimits = [
	I1003 18:08:13.803393   31648 command_runner.go:130] > # ]
	I1003 18:08:13.803403   31648 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1003 18:08:13.803409   31648 command_runner.go:130] > # no_pivot = false
	I1003 18:08:13.803422   31648 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1003 18:08:13.803432   31648 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1003 18:08:13.803444   31648 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1003 18:08:13.803455   31648 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1003 18:08:13.803462   31648 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1003 18:08:13.803473   31648 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1003 18:08:13.803482   31648 command_runner.go:130] > # conmon = ""
	I1003 18:08:13.803489   31648 command_runner.go:130] > # Cgroup setting for conmon
	I1003 18:08:13.803504   31648 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1003 18:08:13.803513   31648 command_runner.go:130] > conmon_cgroup = "pod"
	I1003 18:08:13.803523   31648 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1003 18:08:13.803534   31648 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1003 18:08:13.803545   31648 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1003 18:08:13.803554   31648 command_runner.go:130] > # conmon_env = [
	I1003 18:08:13.803560   31648 command_runner.go:130] > # ]
	I1003 18:08:13.803573   31648 command_runner.go:130] > # Additional environment variables to set for all the
	I1003 18:08:13.803583   31648 command_runner.go:130] > # containers. These are overridden if set in the
	I1003 18:08:13.803595   31648 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1003 18:08:13.803603   31648 command_runner.go:130] > # default_env = [
	I1003 18:08:13.803611   31648 command_runner.go:130] > # ]
	I1003 18:08:13.803620   31648 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1003 18:08:13.803635   31648 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1003 18:08:13.803644   31648 command_runner.go:130] > # selinux = false
	I1003 18:08:13.803657   31648 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1003 18:08:13.803681   31648 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1003 18:08:13.803693   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.803703   31648 command_runner.go:130] > # seccomp_profile = ""
	I1003 18:08:13.803714   31648 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1003 18:08:13.803725   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.803735   31648 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1003 18:08:13.803746   31648 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1003 18:08:13.803760   31648 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1003 18:08:13.803772   31648 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1003 18:08:13.803785   31648 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1003 18:08:13.803796   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.803803   31648 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1003 18:08:13.803817   31648 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1003 18:08:13.803827   31648 command_runner.go:130] > # the cgroup blockio controller.
	I1003 18:08:13.803833   31648 command_runner.go:130] > # blockio_config_file = ""
	I1003 18:08:13.803847   31648 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1003 18:08:13.803856   31648 command_runner.go:130] > # blockio parameters.
	I1003 18:08:13.803862   31648 command_runner.go:130] > # blockio_reload = false
	I1003 18:08:13.803869   31648 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1003 18:08:13.803877   31648 command_runner.go:130] > # irqbalance daemon.
	I1003 18:08:13.803883   31648 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1003 18:08:13.803890   31648 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1003 18:08:13.803906   31648 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1003 18:08:13.803916   31648 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1003 18:08:13.803925   31648 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1003 18:08:13.803933   31648 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1003 18:08:13.803939   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.803951   31648 command_runner.go:130] > # rdt_config_file = ""
	I1003 18:08:13.803958   31648 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1003 18:08:13.803970   31648 command_runner.go:130] > # cgroup_manager = "systemd"
	I1003 18:08:13.803987   31648 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1003 18:08:13.803998   31648 command_runner.go:130] > # separate_pull_cgroup = ""
	I1003 18:08:13.804008   31648 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1003 18:08:13.804017   31648 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1003 18:08:13.804026   31648 command_runner.go:130] > # will be added.
	I1003 18:08:13.804035   31648 command_runner.go:130] > # default_capabilities = [
	I1003 18:08:13.804043   31648 command_runner.go:130] > # 	"CHOWN",
	I1003 18:08:13.804050   31648 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1003 18:08:13.804055   31648 command_runner.go:130] > # 	"FSETID",
	I1003 18:08:13.804066   31648 command_runner.go:130] > # 	"FOWNER",
	I1003 18:08:13.804071   31648 command_runner.go:130] > # 	"SETGID",
	I1003 18:08:13.804087   31648 command_runner.go:130] > # 	"SETUID",
	I1003 18:08:13.804093   31648 command_runner.go:130] > # 	"SETPCAP",
	I1003 18:08:13.804097   31648 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1003 18:08:13.804102   31648 command_runner.go:130] > # 	"KILL",
	I1003 18:08:13.804105   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804112   31648 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1003 18:08:13.804121   31648 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1003 18:08:13.804125   31648 command_runner.go:130] > # add_inheritable_capabilities = false
	I1003 18:08:13.804133   31648 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1003 18:08:13.804138   31648 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1003 18:08:13.804143   31648 command_runner.go:130] > default_sysctls = [
	I1003 18:08:13.804147   31648 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1003 18:08:13.804150   31648 command_runner.go:130] > ]
	I1003 18:08:13.804157   31648 command_runner.go:130] > # List of devices on the host that a
	I1003 18:08:13.804163   31648 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1003 18:08:13.804169   31648 command_runner.go:130] > # allowed_devices = [
	I1003 18:08:13.804173   31648 command_runner.go:130] > # 	"/dev/fuse",
	I1003 18:08:13.804178   31648 command_runner.go:130] > # 	"/dev/net/tun",
	I1003 18:08:13.804181   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804188   31648 command_runner.go:130] > # List of additional devices. specified as
	I1003 18:08:13.804194   31648 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1003 18:08:13.804201   31648 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1003 18:08:13.804207   31648 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1003 18:08:13.804212   31648 command_runner.go:130] > # additional_devices = [
	I1003 18:08:13.804215   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804222   31648 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1003 18:08:13.804226   31648 command_runner.go:130] > # cdi_spec_dirs = [
	I1003 18:08:13.804231   31648 command_runner.go:130] > # 	"/etc/cdi",
	I1003 18:08:13.804235   31648 command_runner.go:130] > # 	"/var/run/cdi",
	I1003 18:08:13.804237   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804243   31648 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1003 18:08:13.804251   31648 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1003 18:08:13.804254   31648 command_runner.go:130] > # Defaults to false.
	I1003 18:08:13.804261   31648 command_runner.go:130] > # device_ownership_from_security_context = false
	I1003 18:08:13.804268   31648 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1003 18:08:13.804275   31648 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1003 18:08:13.804279   31648 command_runner.go:130] > # hooks_dir = [
	I1003 18:08:13.804286   31648 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1003 18:08:13.804290   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804297   31648 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1003 18:08:13.804303   31648 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1003 18:08:13.804309   31648 command_runner.go:130] > # its default mounts from the following two files:
	I1003 18:08:13.804312   31648 command_runner.go:130] > #
	I1003 18:08:13.804320   31648 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1003 18:08:13.804326   31648 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1003 18:08:13.804333   31648 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1003 18:08:13.804336   31648 command_runner.go:130] > #
	I1003 18:08:13.804342   31648 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1003 18:08:13.804349   31648 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1003 18:08:13.804356   31648 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1003 18:08:13.804363   31648 command_runner.go:130] > #      only add mounts it finds in this file.
	I1003 18:08:13.804366   31648 command_runner.go:130] > #
	I1003 18:08:13.804372   31648 command_runner.go:130] > # default_mounts_file = ""
	I1003 18:08:13.804376   31648 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1003 18:08:13.804384   31648 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1003 18:08:13.804388   31648 command_runner.go:130] > # pids_limit = -1
	I1003 18:08:13.804396   31648 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1003 18:08:13.804401   31648 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1003 18:08:13.804409   31648 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1003 18:08:13.804417   31648 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1003 18:08:13.804422   31648 command_runner.go:130] > # log_size_max = -1
	I1003 18:08:13.804429   31648 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1003 18:08:13.804435   31648 command_runner.go:130] > # log_to_journald = false
	I1003 18:08:13.804441   31648 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1003 18:08:13.804447   31648 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1003 18:08:13.804451   31648 command_runner.go:130] > # Path to directory for container attach sockets.
	I1003 18:08:13.804458   31648 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1003 18:08:13.804463   31648 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1003 18:08:13.804469   31648 command_runner.go:130] > # bind_mount_prefix = ""
	I1003 18:08:13.804473   31648 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1003 18:08:13.804479   31648 command_runner.go:130] > # read_only = false
	I1003 18:08:13.804486   31648 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1003 18:08:13.804494   31648 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1003 18:08:13.804497   31648 command_runner.go:130] > # live configuration reload.
	I1003 18:08:13.804501   31648 command_runner.go:130] > # log_level = "info"
	I1003 18:08:13.804508   31648 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1003 18:08:13.804513   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.804519   31648 command_runner.go:130] > # log_filter = ""
	I1003 18:08:13.804524   31648 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1003 18:08:13.804532   31648 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1003 18:08:13.804535   31648 command_runner.go:130] > # separated by comma.
	I1003 18:08:13.804544   31648 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1003 18:08:13.804551   31648 command_runner.go:130] > # uid_mappings = ""
	I1003 18:08:13.804557   31648 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1003 18:08:13.804564   31648 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1003 18:08:13.804569   31648 command_runner.go:130] > # separated by comma.
	I1003 18:08:13.804578   31648 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1003 18:08:13.804582   31648 command_runner.go:130] > # gid_mappings = ""
	I1003 18:08:13.804589   31648 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1003 18:08:13.804595   31648 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1003 18:08:13.804603   31648 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1003 18:08:13.804612   31648 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1003 18:08:13.804618   31648 command_runner.go:130] > # minimum_mappable_uid = -1
	I1003 18:08:13.804624   31648 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1003 18:08:13.804631   31648 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1003 18:08:13.804636   31648 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1003 18:08:13.804645   31648 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1003 18:08:13.804651   31648 command_runner.go:130] > # minimum_mappable_gid = -1
	I1003 18:08:13.804657   31648 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1003 18:08:13.804669   31648 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1003 18:08:13.804674   31648 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1003 18:08:13.804680   31648 command_runner.go:130] > # ctr_stop_timeout = 30
	I1003 18:08:13.804685   31648 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1003 18:08:13.804693   31648 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1003 18:08:13.804697   31648 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1003 18:08:13.804703   31648 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1003 18:08:13.804707   31648 command_runner.go:130] > # drop_infra_ctr = true
	I1003 18:08:13.804715   31648 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1003 18:08:13.804720   31648 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1003 18:08:13.804728   31648 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1003 18:08:13.804735   31648 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1003 18:08:13.804742   31648 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1003 18:08:13.804749   31648 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1003 18:08:13.804754   31648 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1003 18:08:13.804761   31648 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1003 18:08:13.804765   31648 command_runner.go:130] > # shared_cpuset = ""
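A minimal sketch of the Linux CPU list format both cpuset options accept; the CPU numbers are assumptions for illustration:

    # Illustrative: pin infra containers to CPUs 0-1, allow CPUs 2 and 3 to be shared.
    infra_ctr_cpuset = "0-1"
    shared_cpuset = "2,3"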
	I1003 18:08:13.804773   31648 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1003 18:08:13.804777   31648 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1003 18:08:13.804783   31648 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1003 18:08:13.804789   31648 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1003 18:08:13.804795   31648 command_runner.go:130] > # pinns_path = ""
	I1003 18:08:13.804800   31648 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1003 18:08:13.804808   31648 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1003 18:08:13.804813   31648 command_runner.go:130] > # enable_criu_support = true
	I1003 18:08:13.804819   31648 command_runner.go:130] > # Enable/disable the generation of the container and
	I1003 18:08:13.804825   31648 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1003 18:08:13.804832   31648 command_runner.go:130] > # enable_pod_events = false
	I1003 18:08:13.804837   31648 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1003 18:08:13.804844   31648 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1003 18:08:13.804848   31648 command_runner.go:130] > # default_runtime = "crun"
	I1003 18:08:13.804855   31648 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1003 18:08:13.804862   31648 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as a directory).
	I1003 18:08:13.804874   31648 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1003 18:08:13.804881   31648 command_runner.go:130] > # creation as a file is not desired either.
	I1003 18:08:13.804889   31648 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1003 18:08:13.804896   31648 command_runner.go:130] > # the hostname is being managed dynamically.
	I1003 18:08:13.804900   31648 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1003 18:08:13.804905   31648 command_runner.go:130] > # ]
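A minimal sketch using the /etc/hostname case called out above:

    # Fail container creation if /etc/hostname is absent on the host, rather than
    # silently creating it as a directory.
    absent_mount_sources_to_reject = [
        "/etc/hostname",
    ]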
	I1003 18:08:13.804912   31648 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1003 18:08:13.804920   31648 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1003 18:08:13.804926   31648 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1003 18:08:13.804931   31648 command_runner.go:130] > # Each entry in the table should follow the format:
	I1003 18:08:13.804934   31648 command_runner.go:130] > #
	I1003 18:08:13.804941   31648 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1003 18:08:13.804945   31648 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1003 18:08:13.804952   31648 command_runner.go:130] > # runtime_type = "oci"
	I1003 18:08:13.804956   31648 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1003 18:08:13.804963   31648 command_runner.go:130] > # inherit_default_runtime = false
	I1003 18:08:13.804968   31648 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1003 18:08:13.804988   31648 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1003 18:08:13.804996   31648 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1003 18:08:13.805005   31648 command_runner.go:130] > # monitor_env = []
	I1003 18:08:13.805011   31648 command_runner.go:130] > # privileged_without_host_devices = false
	I1003 18:08:13.805017   31648 command_runner.go:130] > # allowed_annotations = []
	I1003 18:08:13.805022   31648 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1003 18:08:13.805028   31648 command_runner.go:130] > # no_sync_log = false
	I1003 18:08:13.805032   31648 command_runner.go:130] > # default_annotations = {}
	I1003 18:08:13.805038   31648 command_runner.go:130] > # stream_websockets = false
	I1003 18:08:13.805042   31648 command_runner.go:130] > # seccomp_profile = ""
	I1003 18:08:13.805062   31648 command_runner.go:130] > # Where:
	I1003 18:08:13.805069   31648 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1003 18:08:13.805075   31648 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1003 18:08:13.805081   31648 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1003 18:08:13.805089   31648 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1003 18:08:13.805092   31648 command_runner.go:130] > #   in $PATH.
	I1003 18:08:13.805100   31648 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1003 18:08:13.805105   31648 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1003 18:08:13.805112   31648 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1003 18:08:13.805115   31648 command_runner.go:130] > #   state.
	I1003 18:08:13.805121   31648 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1003 18:08:13.805128   31648 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1003 18:08:13.805133   31648 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1003 18:08:13.805141   31648 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1003 18:08:13.805146   31648 command_runner.go:130] > #   the values from the default runtime on load time.
	I1003 18:08:13.805153   31648 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1003 18:08:13.805158   31648 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1003 18:08:13.805165   31648 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1003 18:08:13.805177   31648 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1003 18:08:13.805183   31648 command_runner.go:130] > #   The currently recognized values are:
	I1003 18:08:13.805190   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1003 18:08:13.805199   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1003 18:08:13.805207   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1003 18:08:13.805214   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1003 18:08:13.805221   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1003 18:08:13.805229   31648 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1003 18:08:13.805235   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1003 18:08:13.805243   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1003 18:08:13.805251   31648 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1003 18:08:13.805257   31648 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1003 18:08:13.805265   31648 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1003 18:08:13.805273   31648 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1003 18:08:13.805278   31648 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1003 18:08:13.805285   31648 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1003 18:08:13.805291   31648 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1003 18:08:13.805300   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1003 18:08:13.805308   31648 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1003 18:08:13.805312   31648 command_runner.go:130] > #   deprecated option "conmon".
	I1003 18:08:13.805319   31648 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1003 18:08:13.805326   31648 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1003 18:08:13.805332   31648 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1003 18:08:13.805339   31648 command_runner.go:130] > #   should be moved to the container's cgroup
	I1003 18:08:13.805346   31648 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1003 18:08:13.805352   31648 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1003 18:08:13.805358   31648 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1003 18:08:13.805364   31648 command_runner.go:130] > #   conmon-rs by using:
	I1003 18:08:13.805370   31648 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1003 18:08:13.805379   31648 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1003 18:08:13.805388   31648 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1003 18:08:13.805395   31648 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1003 18:08:13.805401   31648 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1003 18:08:13.805415   31648 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1003 18:08:13.805423   31648 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1003 18:08:13.805430   31648 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1003 18:08:13.805437   31648 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1003 18:08:13.805449   31648 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1003 18:08:13.805455   31648 command_runner.go:130] > #   when a machine crash happens.
	I1003 18:08:13.805462   31648 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1003 18:08:13.805471   31648 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1003 18:08:13.805480   31648 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1003 18:08:13.805485   31648 command_runner.go:130] > #   seccomp profile for the runtime.
	I1003 18:08:13.805491   31648 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1003 18:08:13.805499   31648 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
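Putting the fields above together, a hedged sketch of one possible runtime-handler entry; the handler name and paths are assumptions, not part of this cluster's configuration:

    # Hypothetical VM-type handler; runtime_config_path is only valid with runtime_type = "vm".
    [crio.runtime.runtimes.kata]
    runtime_path = "/usr/bin/kata-runtime"
    runtime_type = "vm"
    runtime_config_path = "/etc/kata-containers/configuration.toml"
    privileged_without_host_devices = true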
	I1003 18:08:13.805504   31648 command_runner.go:130] > #
	I1003 18:08:13.805508   31648 command_runner.go:130] > # Using the seccomp notifier feature:
	I1003 18:08:13.805513   31648 command_runner.go:130] > #
	I1003 18:08:13.805518   31648 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1003 18:08:13.805528   31648 command_runner.go:130] > # blocked syscalls (permission denied errors) have a negative impact on the workload.
	I1003 18:08:13.805533   31648 command_runner.go:130] > #
	I1003 18:08:13.805539   31648 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1003 18:08:13.805547   31648 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1003 18:08:13.805549   31648 command_runner.go:130] > #
	I1003 18:08:13.805555   31648 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1003 18:08:13.805560   31648 command_runner.go:130] > # feature.
	I1003 18:08:13.805563   31648 command_runner.go:130] > #
	I1003 18:08:13.805568   31648 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1003 18:08:13.805576   31648 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1003 18:08:13.805582   31648 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1003 18:08:13.805589   31648 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1003 18:08:13.805595   31648 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1003 18:08:13.805600   31648 command_runner.go:130] > #
	I1003 18:08:13.805605   31648 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1003 18:08:13.805614   31648 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1003 18:08:13.805619   31648 command_runner.go:130] > #
	I1003 18:08:13.805625   31648 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1003 18:08:13.805632   31648 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1003 18:08:13.805635   31648 command_runner.go:130] > #
	I1003 18:08:13.805641   31648 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1003 18:08:13.805649   31648 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1003 18:08:13.805652   31648 command_runner.go:130] > # limitation.
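A minimal sketch of wiring the notifier into a handler via allowed_annotations, as required above; the handler name is hypothetical and distinct from the real entries that follow:

    # Hypothetical handler that may process the seccomp notifier annotation.
    [crio.runtime.runtimes.debug-runc]
    runtime_path = "/usr/libexec/crio/runc"
    allowed_annotations = [
        "io.kubernetes.cri-o.seccompNotifierAction",
    ]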
	I1003 18:08:13.805656   31648 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1003 18:08:13.805666   31648 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1003 18:08:13.805671   31648 command_runner.go:130] > runtime_type = ""
	I1003 18:08:13.805675   31648 command_runner.go:130] > runtime_root = "/run/crun"
	I1003 18:08:13.805679   31648 command_runner.go:130] > inherit_default_runtime = false
	I1003 18:08:13.805683   31648 command_runner.go:130] > runtime_config_path = ""
	I1003 18:08:13.805689   31648 command_runner.go:130] > container_min_memory = ""
	I1003 18:08:13.805694   31648 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1003 18:08:13.805700   31648 command_runner.go:130] > monitor_cgroup = "pod"
	I1003 18:08:13.805704   31648 command_runner.go:130] > monitor_exec_cgroup = ""
	I1003 18:08:13.805710   31648 command_runner.go:130] > allowed_annotations = [
	I1003 18:08:13.805714   31648 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1003 18:08:13.805718   31648 command_runner.go:130] > ]
	I1003 18:08:13.805722   31648 command_runner.go:130] > privileged_without_host_devices = false
	I1003 18:08:13.805728   31648 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1003 18:08:13.805733   31648 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1003 18:08:13.805738   31648 command_runner.go:130] > runtime_type = ""
	I1003 18:08:13.805742   31648 command_runner.go:130] > runtime_root = "/run/runc"
	I1003 18:08:13.805748   31648 command_runner.go:130] > inherit_default_runtime = false
	I1003 18:08:13.805751   31648 command_runner.go:130] > runtime_config_path = ""
	I1003 18:08:13.805758   31648 command_runner.go:130] > container_min_memory = ""
	I1003 18:08:13.805762   31648 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1003 18:08:13.805767   31648 command_runner.go:130] > monitor_cgroup = "pod"
	I1003 18:08:13.805771   31648 command_runner.go:130] > monitor_exec_cgroup = ""
	I1003 18:08:13.805778   31648 command_runner.go:130] > privileged_without_host_devices = false
	I1003 18:08:13.805784   31648 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1003 18:08:13.805790   31648 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1003 18:08:13.805796   31648 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1003 18:08:13.805805   31648 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1003 18:08:13.805817   31648 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1003 18:08:13.805828   31648 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1003 18:08:13.805837   31648 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1003 18:08:13.805842   31648 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1003 18:08:13.805852   31648 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1003 18:08:13.805860   31648 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1003 18:08:13.805867   31648 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1003 18:08:13.805873   31648 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1003 18:08:13.805878   31648 command_runner.go:130] > # Example:
	I1003 18:08:13.805882   31648 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1003 18:08:13.805886   31648 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1003 18:08:13.805893   31648 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1003 18:08:13.805899   31648 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1003 18:08:13.805903   31648 command_runner.go:130] > # cpuset = "0-1"
	I1003 18:08:13.805906   31648 command_runner.go:130] > # cpushares = "5"
	I1003 18:08:13.805910   31648 command_runner.go:130] > # cpuquota = "1000"
	I1003 18:08:13.805919   31648 command_runner.go:130] > # cpuperiod = "100000"
	I1003 18:08:13.805924   31648 command_runner.go:130] > # cpulimit = "35"
	I1003 18:08:13.805933   31648 command_runner.go:130] > # Where:
	I1003 18:08:13.805940   31648 command_runner.go:130] > # The workload name is workload-type.
	I1003 18:08:13.805950   31648 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1003 18:08:13.805955   31648 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1003 18:08:13.805960   31648 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1003 18:08:13.805971   31648 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1003 18:08:13.805994   31648 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1003 18:08:13.806006   31648 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1003 18:08:13.806019   31648 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1003 18:08:13.806027   31648 command_runner.go:130] > # Default value is set to true
	I1003 18:08:13.806031   31648 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1003 18:08:13.806036   31648 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1003 18:08:13.806040   31648 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1003 18:08:13.806047   31648 command_runner.go:130] > # Default value is set to 'false'
	I1003 18:08:13.806052   31648 command_runner.go:130] > # disable_hostport_mapping = false
	I1003 18:08:13.806057   31648 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1003 18:08:13.806066   31648 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1003 18:08:13.806074   31648 command_runner.go:130] > # timezone = ""
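A one-line sketch of the 'Local' behavior described above:

    # Match the host machine's timezone inside containers.
    timezone = "Local"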
	I1003 18:08:13.806085   31648 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1003 18:08:13.806093   31648 command_runner.go:130] > #
	I1003 18:08:13.806105   31648 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1003 18:08:13.806116   31648 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1003 18:08:13.806122   31648 command_runner.go:130] > [crio.image]
	I1003 18:08:13.806127   31648 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1003 18:08:13.806134   31648 command_runner.go:130] > # default_transport = "docker://"
	I1003 18:08:13.806139   31648 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1003 18:08:13.806147   31648 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1003 18:08:13.806154   31648 command_runner.go:130] > # global_auth_file = ""
	I1003 18:08:13.806159   31648 command_runner.go:130] > # The image used to instantiate infra containers.
	I1003 18:08:13.806165   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.806170   31648 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1003 18:08:13.806178   31648 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1003 18:08:13.806185   31648 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1003 18:08:13.806190   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.806196   31648 command_runner.go:130] > # pause_image_auth_file = ""
	I1003 18:08:13.806202   31648 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1003 18:08:13.806209   31648 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1003 18:08:13.806215   31648 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1003 18:08:13.806220   31648 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1003 18:08:13.806226   31648 command_runner.go:130] > # pause_command = "/pause"
	I1003 18:08:13.806231   31648 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1003 18:08:13.806239   31648 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1003 18:08:13.806244   31648 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1003 18:08:13.806252   31648 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1003 18:08:13.806257   31648 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1003 18:08:13.806264   31648 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1003 18:08:13.806268   31648 command_runner.go:130] > # pinned_images = [
	I1003 18:08:13.806271   31648 command_runner.go:130] > # ]
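A sketch of the three pattern styles described above; the pause image is the default named earlier in this config, while the other two entries are illustrative assumptions:

    pinned_images = [
        "registry.k8s.io/pause:3.10.1",  # exact match
        "quay.io/myorg/*",               # glob: wildcard at the end
        "*critical*",                    # keyword: wildcards on both ends
    ]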
	I1003 18:08:13.806278   31648 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1003 18:08:13.806286   31648 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1003 18:08:13.806293   31648 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1003 18:08:13.806301   31648 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1003 18:08:13.806306   31648 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1003 18:08:13.806312   31648 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1003 18:08:13.806318   31648 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1003 18:08:13.806325   31648 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1003 18:08:13.806333   31648 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1003 18:08:13.806341   31648 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or the
	I1003 18:08:13.806347   31648 command_runner.go:130] > # system-wide policy will be used as fallback. Must be an absolute path.
	I1003 18:08:13.806353   31648 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1003 18:08:13.806358   31648 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1003 18:08:13.806366   31648 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1003 18:08:13.806369   31648 command_runner.go:130] > # changing them here.
	I1003 18:08:13.806374   31648 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1003 18:08:13.806380   31648 command_runner.go:130] > # insecure_registries = [
	I1003 18:08:13.806383   31648 command_runner.go:130] > # ]
	I1003 18:08:13.806391   31648 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1003 18:08:13.806398   31648 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1003 18:08:13.806404   31648 command_runner.go:130] > # image_volumes = "mkdir"
	I1003 18:08:13.806409   31648 command_runner.go:130] > # Temporary directory to use for storing big files
	I1003 18:08:13.806415   31648 command_runner.go:130] > # big_files_temporary_dir = ""
	I1003 18:08:13.806420   31648 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1003 18:08:13.806429   31648 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1003 18:08:13.806435   31648 command_runner.go:130] > # auto_reload_registries = false
	I1003 18:08:13.806441   31648 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1003 18:08:13.806450   31648 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1003 18:08:13.806467   31648 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1003 18:08:13.806473   31648 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1003 18:08:13.806477   31648 command_runner.go:130] > # The mode of short name resolution.
	I1003 18:08:13.806484   31648 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1003 18:08:13.806492   31648 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1003 18:08:13.806499   31648 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1003 18:08:13.806503   31648 command_runner.go:130] > # short_name_mode = "enforcing"
	I1003 18:08:13.806511   31648 command_runner.go:130] > # OCIArtifactMountSupport determines whether CRI-O should support OCI artifacts.
	I1003 18:08:13.806518   31648 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1003 18:08:13.806523   31648 command_runner.go:130] > # oci_artifact_mount_support = true
	I1003 18:08:13.806530   31648 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1003 18:08:13.806535   31648 command_runner.go:130] > # CNI plugins.
	I1003 18:08:13.806541   31648 command_runner.go:130] > [crio.network]
	I1003 18:08:13.806546   31648 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1003 18:08:13.806553   31648 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1003 18:08:13.806557   31648 command_runner.go:130] > # cni_default_network = ""
	I1003 18:08:13.806562   31648 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1003 18:08:13.806568   31648 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1003 18:08:13.806573   31648 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1003 18:08:13.806580   31648 command_runner.go:130] > # plugin_dirs = [
	I1003 18:08:13.806584   31648 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1003 18:08:13.806589   31648 command_runner.go:130] > # ]
	I1003 18:08:13.806593   31648 command_runner.go:130] > # List of included pod metrics.
	I1003 18:08:13.806599   31648 command_runner.go:130] > # included_pod_metrics = [
	I1003 18:08:13.806603   31648 command_runner.go:130] > # ]
	I1003 18:08:13.806610   31648 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1003 18:08:13.806614   31648 command_runner.go:130] > [crio.metrics]
	I1003 18:08:13.806618   31648 command_runner.go:130] > # Globally enable or disable metrics support.
	I1003 18:08:13.806624   31648 command_runner.go:130] > # enable_metrics = false
	I1003 18:08:13.806629   31648 command_runner.go:130] > # Specify enabled metrics collectors.
	I1003 18:08:13.806635   31648 command_runner.go:130] > # Per default all metrics are enabled.
	I1003 18:08:13.806640   31648 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1003 18:08:13.806647   31648 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1003 18:08:13.806654   31648 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1003 18:08:13.806662   31648 command_runner.go:130] > # metrics_collectors = [
	I1003 18:08:13.806668   31648 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1003 18:08:13.806672   31648 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1003 18:08:13.806676   31648 command_runner.go:130] > # 	"containers_oom_total",
	I1003 18:08:13.806679   31648 command_runner.go:130] > # 	"processes_defunct",
	I1003 18:08:13.806682   31648 command_runner.go:130] > # 	"operations_total",
	I1003 18:08:13.806687   31648 command_runner.go:130] > # 	"operations_latency_seconds",
	I1003 18:08:13.806691   31648 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1003 18:08:13.806694   31648 command_runner.go:130] > # 	"operations_errors_total",
	I1003 18:08:13.806697   31648 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1003 18:08:13.806701   31648 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1003 18:08:13.806705   31648 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1003 18:08:13.806709   31648 command_runner.go:130] > # 	"image_pulls_success_total",
	I1003 18:08:13.806713   31648 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1003 18:08:13.806716   31648 command_runner.go:130] > # 	"containers_oom_count_total",
	I1003 18:08:13.806720   31648 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1003 18:08:13.806724   31648 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1003 18:08:13.806728   31648 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1003 18:08:13.806730   31648 command_runner.go:130] > # ]
	I1003 18:08:13.806736   31648 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1003 18:08:13.806739   31648 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1003 18:08:13.806744   31648 command_runner.go:130] > # The port on which the metrics server will listen.
	I1003 18:08:13.806747   31648 command_runner.go:130] > # metrics_port = 9090
	I1003 18:08:13.806751   31648 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1003 18:08:13.806755   31648 command_runner.go:130] > # metrics_socket = ""
	I1003 18:08:13.806759   31648 command_runner.go:130] > # The certificate for the secure metrics server.
	I1003 18:08:13.806765   31648 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1003 18:08:13.806770   31648 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1003 18:08:13.806774   31648 command_runner.go:130] > # certificate on any modification event.
	I1003 18:08:13.806780   31648 command_runner.go:130] > # metrics_cert = ""
	I1003 18:08:13.806785   31648 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1003 18:08:13.806791   31648 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1003 18:08:13.806795   31648 command_runner.go:130] > # metrics_key = ""
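A hedged sketch enabling the metrics server with a reduced collector set drawn from the list above; host and port are the defaults shown:

    [crio.metrics]
    enable_metrics = true
    metrics_collectors = [
        "operations_total",
        "image_pulls_failure_total",
    ]
    metrics_host = "127.0.0.1"
    metrics_port = 9090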
	I1003 18:08:13.806802   31648 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1003 18:08:13.806805   31648 command_runner.go:130] > [crio.tracing]
	I1003 18:08:13.806810   31648 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1003 18:08:13.806816   31648 command_runner.go:130] > # enable_tracing = false
	I1003 18:08:13.806821   31648 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1003 18:08:13.806827   31648 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1003 18:08:13.806834   31648 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1003 18:08:13.806841   31648 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
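A minimal sketch enabling tracing against the default endpoint, using the always-sample value noted above:

    [crio.tracing]
    enable_tracing = true
    tracing_endpoint = "127.0.0.1:4317"
    tracing_sampling_rate_per_million = 1000000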
	I1003 18:08:13.806845   31648 command_runner.go:130] > # CRI-O NRI configuration.
	I1003 18:08:13.806850   31648 command_runner.go:130] > [crio.nri]
	I1003 18:08:13.806854   31648 command_runner.go:130] > # Globally enable or disable NRI.
	I1003 18:08:13.806860   31648 command_runner.go:130] > # enable_nri = true
	I1003 18:08:13.806864   31648 command_runner.go:130] > # NRI socket to listen on.
	I1003 18:08:13.806870   31648 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1003 18:08:13.806874   31648 command_runner.go:130] > # NRI plugin directory to use.
	I1003 18:08:13.806880   31648 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1003 18:08:13.806885   31648 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1003 18:08:13.806891   31648 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1003 18:08:13.806896   31648 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1003 18:08:13.806926   31648 command_runner.go:130] > # nri_disable_connections = false
	I1003 18:08:13.806934   31648 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1003 18:08:13.806938   31648 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1003 18:08:13.806944   31648 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1003 18:08:13.806948   31648 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1003 18:08:13.806955   31648 command_runner.go:130] > # NRI default validator configuration.
	I1003 18:08:13.806961   31648 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1003 18:08:13.806968   31648 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1003 18:08:13.806972   31648 command_runner.go:130] > # can be restricted/rejected:
	I1003 18:08:13.806990   31648 command_runner.go:130] > # - OCI hook injection
	I1003 18:08:13.806998   31648 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1003 18:08:13.807007   31648 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1003 18:08:13.807014   31648 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1003 18:08:13.807024   31648 command_runner.go:130] > # - adjustment of linux namespaces
	I1003 18:08:13.807033   31648 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1003 18:08:13.807041   31648 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1003 18:08:13.807046   31648 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1003 18:08:13.807051   31648 command_runner.go:130] > #
	I1003 18:08:13.807055   31648 command_runner.go:130] > # [crio.nri.default_validator]
	I1003 18:08:13.807060   31648 command_runner.go:130] > # nri_enable_default_validator = false
	I1003 18:08:13.807067   31648 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1003 18:08:13.807072   31648 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1003 18:08:13.807079   31648 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1003 18:08:13.807083   31648 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1003 18:08:13.807088   31648 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1003 18:08:13.807094   31648 command_runner.go:130] > # nri_validator_required_plugins = [
	I1003 18:08:13.807097   31648 command_runner.go:130] > # ]
	I1003 18:08:13.807104   31648 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
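A hedged sketch of turning on the built-in validator to reject one class of adjustment; which adjustments to reject is a policy choice, and the values below are illustrative:

    [crio.nri.default_validator]
    nri_enable_default_validator = true
    nri_validator_reject_oci_hook_adjustment = true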
	I1003 18:08:13.807109   31648 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1003 18:08:13.807115   31648 command_runner.go:130] > [crio.stats]
	I1003 18:08:13.807121   31648 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1003 18:08:13.807128   31648 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1003 18:08:13.807132   31648 command_runner.go:130] > # stats_collection_period = 0
	I1003 18:08:13.807141   31648 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1003 18:08:13.807147   31648 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1003 18:08:13.807154   31648 command_runner.go:130] > # collection_period = 0
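A one-setting sketch switching from on-demand collection to a fixed period; the 10-second value is an assumption:

    [crio.stats]
    stats_collection_period = 10
    collection_period = 10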
	I1003 18:08:13.807173   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.78773481Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1003 18:08:13.807183   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.787758775Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1003 18:08:13.807194   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.787775454Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1003 18:08:13.807203   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.78779273Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1003 18:08:13.807213   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.7878475Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.807222   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.788021357Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1003 18:08:13.807234   31648 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1003 18:08:13.807290   31648 cni.go:84] Creating CNI manager for ""
	I1003 18:08:13.807303   31648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:08:13.807321   31648 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:08:13.807344   31648 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-889240 NodeName:functional-889240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:08:13.807460   31648 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-889240"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 18:08:13.807513   31648 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:08:13.814815   31648 command_runner.go:130] > kubeadm
	I1003 18:08:13.814829   31648 command_runner.go:130] > kubectl
	I1003 18:08:13.814834   31648 command_runner.go:130] > kubelet
	I1003 18:08:13.815427   31648 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:08:13.815489   31648 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:08:13.822648   31648 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1003 18:08:13.834615   31648 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:08:13.846006   31648 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1003 18:08:13.857402   31648 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 18:08:13.860916   31648 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1003 18:08:13.860998   31648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:08:13.942536   31648 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:08:13.955386   31648 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240 for IP: 192.168.49.2
	I1003 18:08:13.955406   31648 certs.go:195] generating shared ca certs ...
	I1003 18:08:13.955424   31648 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:08:13.955571   31648 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:08:13.955642   31648 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:08:13.955660   31648 certs.go:257] generating profile certs ...
	I1003 18:08:13.955770   31648 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.key
	I1003 18:08:13.955933   31648 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key.eb3f8f7c
	I1003 18:08:13.956034   31648 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key
	I1003 18:08:13.956049   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:08:13.956072   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:08:13.956090   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:08:13.956107   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:08:13.956123   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:08:13.956140   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:08:13.956160   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:08:13.956185   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:08:13.956244   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:08:13.956286   31648 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:08:13.956298   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:08:13.956331   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:08:13.956364   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:08:13.956397   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:08:13.956451   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:08:13.956487   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:08:13.956507   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:08:13.956528   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:13.957144   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:08:13.973779   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:08:13.990161   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:08:14.006157   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:08:14.022253   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 18:08:14.038198   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:08:14.054095   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:08:14.069959   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:08:14.085810   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:08:14.101812   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:08:14.117716   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:08:14.134093   31648 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:08:14.145835   31648 ssh_runner.go:195] Run: openssl version
	I1003 18:08:14.151369   31648 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1003 18:08:14.151660   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:08:14.160011   31648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:08:14.163572   31648 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:08:14.163595   31648 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:08:14.163631   31648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:08:14.196823   31648 command_runner.go:130] > 3ec20f2e
	I1003 18:08:14.197073   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:08:14.204835   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:08:14.212908   31648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:14.216400   31648 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:14.216425   31648 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:14.216454   31648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:14.249946   31648 command_runner.go:130] > b5213941
	I1003 18:08:14.250032   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:08:14.257940   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:08:14.266302   31648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:08:14.269939   31648 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:08:14.269964   31648 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:08:14.270013   31648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:08:14.303247   31648 command_runner.go:130] > 51391683
	I1003 18:08:14.303479   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
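
The three hash-and-link steps above implement OpenSSL's hashed-directory CA lookup: each certificate is linked into /etc/ssl/certs under its subject hash plus a ".0" suffix (e.g. 51391683.0), which is how the logged "openssl x509 -hash" output is consumed. A minimal Go sketch of that one step, shelling out to openssl the same way the log does; installCACert is an illustrative name, not minikube's actual helper:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCACert links certPath into /etc/ssl/certs under its OpenSSL
    // subject hash, mirroring the logged sequence:
    //   openssl x509 -hash -noout -in <cert> && ln -fs <cert> /etc/ssl/certs/<hash>.0
    func installCACert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // replace any stale link, as ln -fs would
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
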
	I1003 18:08:14.311263   31648 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:08:14.314772   31648 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:08:14.314798   31648 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1003 18:08:14.314807   31648 command_runner.go:130] > Device: 8,1	Inode: 579409      Links: 1
	I1003 18:08:14.314815   31648 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1003 18:08:14.314823   31648 command_runner.go:130] > Access: 2025-10-03 18:04:07.266428775 +0000
	I1003 18:08:14.314828   31648 command_runner.go:130] > Modify: 2025-10-03 18:00:02.305264452 +0000
	I1003 18:08:14.314842   31648 command_runner.go:130] > Change: 2025-10-03 18:00:02.305264452 +0000
	I1003 18:08:14.314851   31648 command_runner.go:130] >  Birth: 2025-10-03 18:00:02.305264452 +0000
	I1003 18:08:14.314920   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 18:08:14.349195   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.349493   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 18:08:14.382820   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.383063   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 18:08:14.416849   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.416933   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 18:08:14.450508   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.450572   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 18:08:14.483927   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.484012   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1003 18:08:14.517658   31648 command_runner.go:130] > Certificate will not expire
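
Each "-checkend 86400" run above asks one question: will this certificate expire within 86400 seconds (24 hours)? "Certificate will not expire" means the check passed and the cert can be reused for the cluster restart. A small sketch of the same check done natively with crypto/x509 instead of shelling out to openssl; the function name is illustrative, the path and 24h window come from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, i.e. the equivalent of `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if soon {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }
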
	I1003 18:08:14.518008   31648 kubeadm.go:400] StartCluster: {Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:08:14.518097   31648 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:08:14.518174   31648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:08:14.544326   31648 cri.go:89] found id: ""
	I1003 18:08:14.544381   31648 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:08:14.551440   31648 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1003 18:08:14.551457   31648 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1003 18:08:14.551463   31648 command_runner.go:130] > /var/lib/minikube/etcd:
	I1003 18:08:14.551962   31648 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 18:08:14.551995   31648 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 18:08:14.552044   31648 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 18:08:14.559024   31648 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:08:14.559104   31648 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-889240" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:14.559135   31648 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-8669/kubeconfig needs updating (will repair): [kubeconfig missing "functional-889240" cluster setting kubeconfig missing "functional-889240" context setting]
	I1003 18:08:14.559426   31648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
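
The repair logged above adds the missing "functional-889240" cluster and context entries back into the Jenkins kubeconfig before writing it out under the file lock. A hedged sketch of that repair using client-go's clientcmd package; the server, paths, and profile name are taken from the surrounding log, but this is an illustration of the pattern, not minikube's kubeconfig.go:

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
        path := "/home/jenkins/minikube-integration/21625-8669/kubeconfig"
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            cfg = api.NewConfig() // start fresh if the file is missing or empty
        }
        // Re-add the cluster, user, and context entries the verify step found missing.
        cfg.Clusters["functional-889240"] = &api.Cluster{
            Server:               "https://192.168.49.2:8441",
            CertificateAuthority: "/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt",
        }
        cfg.AuthInfos["functional-889240"] = &api.AuthInfo{
            ClientCertificate: "/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt",
            ClientKey:         "/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.key",
        }
        cfg.Contexts["functional-889240"] = &api.Context{
            Cluster:  "functional-889240",
            AuthInfo: "functional-889240",
        }
        cfg.CurrentContext = "functional-889240"
        if err := clientcmd.WriteToFile(*cfg, path); err != nil {
            panic(err)
        }
    }
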
	I1003 18:08:14.562686   31648 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:14.562840   31648 kapi.go:59] client config for functional-889240: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:08:14.563280   31648 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 18:08:14.563295   31648 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1003 18:08:14.563300   31648 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1003 18:08:14.563305   31648 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 18:08:14.563310   31648 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1003 18:08:14.563344   31648 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1003 18:08:14.563668   31648 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 18:08:14.571379   31648 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1003 18:08:14.571411   31648 kubeadm.go:601] duration metric: took 19.407047ms to restartPrimaryControlPlane
	I1003 18:08:14.571423   31648 kubeadm.go:402] duration metric: took 53.42211ms to StartCluster
	I1003 18:08:14.571440   31648 settings.go:142] acquiring lock: {Name:mk6bc950503a8f341b8aacc07a8bc72d5db3a25c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:08:14.571546   31648 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:14.572080   31648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:08:14.572261   31648 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:08:14.572328   31648 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 18:08:14.572418   31648 addons.go:69] Setting storage-provisioner=true in profile "functional-889240"
	I1003 18:08:14.572440   31648 addons.go:238] Setting addon storage-provisioner=true in "functional-889240"
	I1003 18:08:14.572443   31648 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:08:14.572454   31648 addons.go:69] Setting default-storageclass=true in profile "functional-889240"
	I1003 18:08:14.572472   31648 host.go:66] Checking if "functional-889240" exists ...
	I1003 18:08:14.572481   31648 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-889240"
	I1003 18:08:14.572708   31648 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:08:14.572822   31648 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:08:14.574934   31648 out.go:179] * Verifying Kubernetes components...
	I1003 18:08:14.575948   31648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:08:14.591352   31648 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:14.591562   31648 kapi.go:59] client config for functional-889240: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:08:14.591895   31648 addons.go:238] Setting addon default-storageclass=true in "functional-889240"
	I1003 18:08:14.591927   31648 host.go:66] Checking if "functional-889240" exists ...
	I1003 18:08:14.592300   31648 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:08:14.592939   31648 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:08:14.594638   31648 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:14.594655   31648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 18:08:14.594693   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:14.617423   31648 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:14.617446   31648 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 18:08:14.617507   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:14.620273   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:14.639039   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:14.672807   31648 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:08:14.684788   31648 node_ready.go:35] waiting up to 6m0s for node "functional-889240" to be "Ready" ...
	I1003 18:08:14.684921   31648 type.go:168] "Request Body" body=""
	I1003 18:08:14.685003   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:14.685252   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
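
The empty Response status above is the first of many: the apiserver is not listening yet, so every GET of the node fails with connection refused and the poll simply fires again on its ~500ms tick, within the 6m0s budget from node_ready.go. A minimal sketch of that Ready poll with client-go; the kubeconfig source is an assumption for illustration, the node name and timings come from the log:

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        for {
            node, err := client.CoreV1().Nodes().Get(ctx, "functional-889240", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            } // connection-refused errors, as in the log, just fall through to the next tick
            select {
            case <-ctx.Done():
                fmt.Fprintln(os.Stderr, "timed out waiting for node Ready")
                os.Exit(1)
            case <-time.After(500 * time.Millisecond):
            }
        }
    }
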
	I1003 18:08:14.730950   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:14.745066   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:14.786328   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:14.786378   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:14.786409   31648 retry.go:31] will retry after 270.951246ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:14.798186   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:14.798232   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:14.798258   31648 retry.go:31] will retry after 360.152106ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
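
Every "will retry after ..." line in this stretch comes from the same pattern: the kubectl apply fails because nothing is listening on [::1]:8441 yet, and the step is re-run after a growing, jittered delay (270ms, 360ms, 397ms, ... climbing toward multi-second waits below). A small self-contained Go sketch of that retry-with-backoff shape; the constants and the retry helper are illustrative, not minikube's actual retry.go schedule:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry re-runs fn until it succeeds or maxElapsed has passed, sleeping
    // a jittered, growing delay between attempts.
    func retry(fn func() error, maxElapsed time.Duration) error {
        start := time.Now()
        delay := 250 * time.Millisecond
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Since(start) > maxElapsed {
                return fmt.Errorf("giving up: %w", err)
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            delay = delay * 3 / 2 // grow toward longer waits, as the log shows
        }
    }

    func main() {
        attempts := 0
        err := retry(func() error {
            attempts++
            if attempts < 4 {
                return errors.New("connect: connection refused")
            }
            return nil // e.g. the apiserver finally came up
        }, 30*time.Second)
        fmt.Println("result:", err, "after", attempts, "attempts")
    }

The design choice visible in the log is that the apply is never treated as fatal mid-restart: the loop keeps absorbing connection-refused errors on the assumption that the control plane will eventually come back, which is also why the failure here surfaces only as an overall test timeout rather than an immediate error.
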
	I1003 18:08:15.057602   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:15.106841   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:15.109109   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.109138   31648 retry.go:31] will retry after 397.537911ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.159331   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:15.185817   31648 type.go:168] "Request Body" body=""
	I1003 18:08:15.185883   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:15.186219   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:15.210176   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:15.210221   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.210238   31648 retry.go:31] will retry after 493.012433ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.507675   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:15.555577   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:15.557666   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.557696   31648 retry.go:31] will retry after 440.122822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.685949   31648 type.go:168] "Request Body" body=""
	I1003 18:08:15.686038   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:15.686370   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:15.703496   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:15.753710   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:15.753758   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.753776   31648 retry.go:31] will retry after 795.152031ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.998073   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:16.047743   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:16.047782   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.047802   31648 retry.go:31] will retry after 705.62402ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.185279   31648 type.go:168] "Request Body" body=""
	I1003 18:08:16.185360   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:16.185691   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:16.549101   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:16.597196   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:16.599345   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.599377   31648 retry.go:31] will retry after 940.255489ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.685633   31648 type.go:168] "Request Body" body=""
	I1003 18:08:16.685701   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:16.685999   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:16.686058   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:16.754204   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:16.801452   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:16.803457   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.803489   31648 retry.go:31] will retry after 1.24021873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:17.184970   31648 type.go:168] "Request Body" body=""
	I1003 18:08:17.185063   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:17.185424   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:17.539832   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:17.590758   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:17.590802   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:17.590823   31648 retry.go:31] will retry after 1.395425458s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:17.685012   31648 type.go:168] "Request Body" body=""
	I1003 18:08:17.685095   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:17.685454   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:18.043958   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:18.094735   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:18.094776   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:18.094793   31648 retry.go:31] will retry after 1.596032935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:18.185003   31648 type.go:168] "Request Body" body=""
	I1003 18:08:18.185100   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:18.185407   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:18.685017   31648 type.go:168] "Request Body" body=""
	I1003 18:08:18.685100   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:18.685393   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:18.986876   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:19.035593   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:19.038332   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:19.038363   31648 retry.go:31] will retry after 1.200373965s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:19.185671   31648 type.go:168] "Request Body" body=""
	I1003 18:08:19.185764   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:19.186105   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:19.186155   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:19.686009   31648 type.go:168] "Request Body" body=""
	I1003 18:08:19.686091   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:19.686423   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:19.691557   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:19.741190   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:19.743532   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:19.743567   31648 retry.go:31] will retry after 3.569328126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:20.185118   31648 type.go:168] "Request Body" body=""
	I1003 18:08:20.185184   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:20.185523   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:20.239734   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:20.289529   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:20.291706   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:20.291741   31648 retry.go:31] will retry after 1.81500567s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:20.685251   31648 type.go:168] "Request Body" body=""
	I1003 18:08:20.685325   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:20.685635   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:21.185510   31648 type.go:168] "Request Body" body=""
	I1003 18:08:21.185583   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:21.185888   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:21.685727   31648 type.go:168] "Request Body" body=""
	I1003 18:08:21.685836   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:21.686208   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:21.686275   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:22.107768   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:22.158032   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:22.158081   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:22.158100   31648 retry.go:31] will retry after 3.676335527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:22.185231   31648 type.go:168] "Request Body" body=""
	I1003 18:08:22.185319   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:22.185614   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:22.685370   31648 type.go:168] "Request Body" body=""
	I1003 18:08:22.685451   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:22.685806   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:23.185639   31648 type.go:168] "Request Body" body=""
	I1003 18:08:23.185743   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:23.186048   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:23.313354   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:23.364461   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:23.364519   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:23.364543   31648 retry.go:31] will retry after 3.926696561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:23.685958   31648 type.go:168] "Request Body" body=""
	I1003 18:08:23.686044   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:23.686339   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:23.686396   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:24.186039   31648 type.go:168] "Request Body" body=""
	I1003 18:08:24.186135   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:24.186455   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:24.685152   31648 type.go:168] "Request Body" body=""
	I1003 18:08:24.685228   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:24.685576   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:25.185310   31648 type.go:168] "Request Body" body=""
	I1003 18:08:25.185375   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:25.185715   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:25.685392   31648 type.go:168] "Request Body" body=""
	I1003 18:08:25.685465   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:25.685774   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:25.835120   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:25.883846   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:25.886330   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:25.886360   31648 retry.go:31] will retry after 9.086319041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
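
The "will retry after 9.086319041s" line comes from minikube's retry helper. The fractional, uneven delays later in this log (9.23s, 13.84s, 7.44s, 11.65s, 25.26s) suggest randomized jitter on top of a growing base. A rough sketch of that pattern, assuming a simple multiplicative jitter; minikube's actual helper in pkg/util/retry is not reproduced here:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry runs a command and, on failure, sleeps a randomized,
// growing delay before the next attempt, logging the delay first the way
// the surrounding log does.
func applyWithRetry(attempts int, base time.Duration, name string, args ...string) error {
	var err error
	for i := 0; i < attempts; i++ {
		out, e := exec.Command(name, args...).CombinedOutput()
		if e == nil {
			return nil
		}
		err = fmt.Errorf("%w: %s", e, out)
		// Jittered backoff: grow with the attempt number and scale by a
		// random factor in [0.5, 1.5), producing uneven delays like the
		// ones observed above.
		delay := time.Duration(float64(base) * float64(i+1) * (0.5 + rand.Float64()))
		fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	err := applyWithRetry(4, 8*time.Second, "kubectl",
		"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml")
	fmt.Println(err)
}
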
	I1003 18:08:26.185864   31648 type.go:168] "Request Body" body=""
	I1003 18:08:26.185950   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:26.186312   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:26.186362   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:26.685071   31648 type.go:168] "Request Body" body=""
	I1003 18:08:26.685149   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:26.685486   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:27.185231   31648 type.go:168] "Request Body" body=""
	I1003 18:08:27.185303   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:27.185670   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:27.291951   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:27.344646   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:27.344705   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:27.344728   31648 retry.go:31] will retry after 9.233335187s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
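
Both apply failures above share a root cause with the node-poll warnings: nothing is listening on port 8441 yet, so kubectl cannot even download the OpenAPI schema it validates manifests against. A small sketch of classifying that case with the Go standard library, so a retry loop can distinguish "apiserver down" from a genuine validation error (Linux semantics assumed):

package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
)

// isConnRefused reports whether err ultimately wraps ECONNREFUSED, which is
// what "dial tcp ...: connect: connection refused" unwraps to.
func isConnRefused(err error) bool {
	return errors.Is(err, syscall.ECONNREFUSED)
}

func main() {
	// Nothing listens on this port in the sketch, so Dial fails immediately.
	_, err := net.Dial("tcp", "127.0.0.1:8441")
	fmt.Println(err)
	fmt.Println(isConnRefused(err)) // true while the apiserver is down
}
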
	I1003 18:08:27.685027   31648 type.go:168] "Request Body" body=""
	I1003 18:08:27.685131   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:27.685438   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:28.185051   31648 type.go:168] "Request Body" body=""
	I1003 18:08:28.185123   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:28.185416   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:28.685061   31648 type.go:168] "Request Body" body=""
	I1003 18:08:28.685136   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:28.685436   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:28.685488   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:29.185050   31648 type.go:168] "Request Body" body=""
	I1003 18:08:29.185116   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:29.185410   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:29.685011   31648 type.go:168] "Request Body" body=""
	I1003 18:08:29.685107   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:29.685414   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:30.185028   31648 type.go:168] "Request Body" body=""
	I1003 18:08:30.185114   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:30.185401   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:30.685020   31648 type.go:168] "Request Body" body=""
	I1003 18:08:30.685097   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:30.685428   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:31.185273   31648 type.go:168] "Request Body" body=""
	I1003 18:08:31.185345   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:31.185680   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:31.185733   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:31.685419   31648 type.go:168] "Request Body" body=""
	I1003 18:08:31.685507   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:31.685800   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:32.185743   31648 type.go:168] "Request Body" body=""
	I1003 18:08:32.185852   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:32.186217   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:32.684952   31648 type.go:168] "Request Body" body=""
	I1003 18:08:32.685038   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:32.685332   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:33.185084   31648 type.go:168] "Request Body" body=""
	I1003 18:08:33.185176   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:33.185536   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:33.685288   31648 type.go:168] "Request Body" body=""
	I1003 18:08:33.685369   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:33.685664   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:33.685725   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:34.185445   31648 type.go:168] "Request Body" body=""
	I1003 18:08:34.185522   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:34.185879   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:34.685599   31648 type.go:168] "Request Body" body=""
	I1003 18:08:34.685698   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:34.686052   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:34.973491   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:35.025995   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:35.026042   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:35.026060   31648 retry.go:31] will retry after 13.835197481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:35.185336   31648 type.go:168] "Request Body" body=""
	I1003 18:08:35.185419   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:35.185713   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:35.685344   31648 type.go:168] "Request Body" body=""
	I1003 18:08:35.685434   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:35.685770   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:35.685857   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:36.185648   31648 type.go:168] "Request Body" body=""
	I1003 18:08:36.185719   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:36.186013   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:36.578491   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:36.629045   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:36.629094   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:36.629123   31648 retry.go:31] will retry after 7.439097167s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:36.685279   31648 type.go:168] "Request Body" body=""
	I1003 18:08:36.685356   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:36.685671   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:37.185440   31648 type.go:168] "Request Body" body=""
	I1003 18:08:37.185503   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:37.185805   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:37.685609   31648 type.go:168] "Request Body" body=""
	I1003 18:08:37.685705   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:37.686055   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:37.686118   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:38.185875   31648 type.go:168] "Request Body" body=""
	I1003 18:08:38.185966   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:38.186273   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:38.685047   31648 type.go:168] "Request Body" body=""
	I1003 18:08:38.685111   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:38.685422   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:39.185132   31648 type.go:168] "Request Body" body=""
	I1003 18:08:39.185219   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:39.185524   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:39.685244   31648 type.go:168] "Request Body" body=""
	I1003 18:08:39.685308   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:39.685620   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:40.185346   31648 type.go:168] "Request Body" body=""
	I1003 18:08:40.185409   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:40.185703   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:40.185782   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:40.685452   31648 type.go:168] "Request Body" body=""
	I1003 18:08:40.685560   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:40.685889   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:41.185504   31648 type.go:168] "Request Body" body=""
	I1003 18:08:41.185583   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:41.185889   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:41.685695   31648 type.go:168] "Request Body" body=""
	I1003 18:08:41.685767   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:41.686090   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:42.185782   31648 type.go:168] "Request Body" body=""
	I1003 18:08:42.185862   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:42.186224   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:42.186281   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:42.685859   31648 type.go:168] "Request Body" body=""
	I1003 18:08:42.685952   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:42.686271   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:43.185893   31648 type.go:168] "Request Body" body=""
	I1003 18:08:43.185999   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:43.186296   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:43.685944   31648 type.go:168] "Request Body" body=""
	I1003 18:08:43.686017   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:43.686309   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:44.068807   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:44.118932   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:44.118993   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:44.119018   31648 retry.go:31] will retry after 11.649333138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
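
The stderr hint about --validate=false is a red herring in this situation: disabling client-side validation would skip the failing OpenAPI download, but the subsequent write to the apiserver would hit the same refused connection, which is why minikube keeps retrying instead. For reference, a sketch of the retried invocation with the flag added; the paths are copied from the log and the flag placement is the only addition:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command as the log, plus --validate=false to skip the OpenAPI
	// fetch; it would still fail while port 8441 refuses connections.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out), err)
}
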
	I1003 18:08:44.185207   31648 type.go:168] "Request Body" body=""
	I1003 18:08:44.185271   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:44.185562   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:44.685354   31648 type.go:168] "Request Body" body=""
	I1003 18:08:44.685421   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:44.685759   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:44.685811   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:45.185341   31648 type.go:168] "Request Body" body=""
	I1003 18:08:45.185433   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:45.185739   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:45.685457   31648 type.go:168] "Request Body" body=""
	I1003 18:08:45.685529   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:45.685878   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:46.185715   31648 type.go:168] "Request Body" body=""
	I1003 18:08:46.185814   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:46.186178   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:46.685956   31648 type.go:168] "Request Body" body=""
	I1003 18:08:46.686040   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:46.686342   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:46.686417   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:47.185108   31648 type.go:168] "Request Body" body=""
	I1003 18:08:47.185173   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:47.185454   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:47.685185   31648 type.go:168] "Request Body" body=""
	I1003 18:08:47.685263   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:47.685629   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:48.185337   31648 type.go:168] "Request Body" body=""
	I1003 18:08:48.185401   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:48.185716   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:48.685423   31648 type.go:168] "Request Body" body=""
	I1003 18:08:48.685491   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:48.685791   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:48.862137   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:48.911551   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:48.911612   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:48.911635   31648 retry.go:31] will retry after 10.230842759s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:49.184986   31648 type.go:168] "Request Body" body=""
	I1003 18:08:49.185056   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:49.185386   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:49.185450   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:49.685132   31648 type.go:168] "Request Body" body=""
	I1003 18:08:49.685197   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:49.685528   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:50.185253   31648 type.go:168] "Request Body" body=""
	I1003 18:08:50.185319   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:50.185649   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:50.685352   31648 type.go:168] "Request Body" body=""
	I1003 18:08:50.685456   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:50.685777   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:51.185614   31648 type.go:168] "Request Body" body=""
	I1003 18:08:51.185727   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:51.186089   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:51.186142   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:51.685865   31648 type.go:168] "Request Body" body=""
	I1003 18:08:51.685970   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:51.686292   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:52.185039   31648 type.go:168] "Request Body" body=""
	I1003 18:08:52.185145   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:52.185488   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:52.685238   31648 type.go:168] "Request Body" body=""
	I1003 18:08:52.685302   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:52.685617   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:53.185313   31648 type.go:168] "Request Body" body=""
	I1003 18:08:53.185377   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:53.185697   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:53.685459   31648 type.go:168] "Request Body" body=""
	I1003 18:08:53.685528   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:53.685880   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:53.685930   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:54.185736   31648 type.go:168] "Request Body" body=""
	I1003 18:08:54.185800   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:54.186122   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:54.685875   31648 type.go:168] "Request Body" body=""
	I1003 18:08:54.685940   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:54.686284   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:55.185038   31648 type.go:168] "Request Body" body=""
	I1003 18:08:55.185103   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:55.185420   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:55.685122   31648 type.go:168] "Request Body" body=""
	I1003 18:08:55.685213   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:55.685505   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:55.768789   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:55.820187   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:55.820247   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:55.820271   31648 retry.go:31] will retry after 17.817355848s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:56.185846   31648 type.go:168] "Request Body" body=""
	I1003 18:08:56.185913   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:56.186233   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:56.186374   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:56.685948   31648 type.go:168] "Request Body" body=""
	I1003 18:08:56.686081   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:56.686423   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:57.185019   31648 type.go:168] "Request Body" body=""
	I1003 18:08:57.185105   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:57.185399   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:57.684931   31648 type.go:168] "Request Body" body=""
	I1003 18:08:57.685041   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:57.685319   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:58.185047   31648 type.go:168] "Request Body" body=""
	I1003 18:08:58.185109   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:58.185402   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:58.685125   31648 type.go:168] "Request Body" body=""
	I1003 18:08:58.685211   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:58.685543   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:58.685617   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:59.143069   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:59.185821   31648 type.go:168] "Request Body" body=""
	I1003 18:08:59.185917   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:59.186232   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:59.193474   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:59.193510   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:59.193527   31648 retry.go:31] will retry after 25.255183485s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
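
Note how the 18:08:59 apply output interleaves with a poll block above: the node-ready wait and the addon applies run concurrently inside the same process (PID 31648), each logging as it goes. A minimal sketch of that fan-out, assuming a WaitGroup over per-addon goroutines alongside the readiness poll; the manifest names are taken from the log, everything else is illustrative:

package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var wg sync.WaitGroup

	// Readiness poll, as sketched earlier, running in its own goroutine.
	wg.Add(1)
	go func() {
		defer wg.Done()
		for i := 0; i < 3; i++ {
			fmt.Println("poll: node not ready (will retry)")
			time.Sleep(500 * time.Millisecond)
		}
	}()

	// Each addon manifest is applied (and retried) independently, which is
	// why apply output interleaves with poll output in the log.
	for _, m := range []string{"storageclass.yaml", "storage-provisioner.yaml"} {
		wg.Add(1)
		go func(manifest string) {
			defer wg.Done()
			fmt.Println("apply:", manifest)
			time.Sleep(700 * time.Millisecond)
			fmt.Println("apply failed, will retry:", manifest)
		}(m)
	}

	wg.Wait()
}
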
	I1003 18:08:59.685108   31648 type.go:168] "Request Body" body=""
	I1003 18:08:59.685198   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:59.685504   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:00.185069   31648 type.go:168] "Request Body" body=""
	I1003 18:09:00.185163   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:00.185465   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:00.685045   31648 type.go:168] "Request Body" body=""
	I1003 18:09:00.685107   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:00.685401   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:01.185250   31648 type.go:168] "Request Body" body=""
	I1003 18:09:01.185349   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:01.185688   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:01.185754   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:01.685310   31648 type.go:168] "Request Body" body=""
	I1003 18:09:01.685402   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:01.685720   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:02.185253   31648 type.go:168] "Request Body" body=""
	I1003 18:09:02.185346   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:02.185664   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:02.685182   31648 type.go:168] "Request Body" body=""
	I1003 18:09:02.685247   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:02.685567   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:03.185121   31648 type.go:168] "Request Body" body=""
	I1003 18:09:03.185184   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:03.185472   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:03.685069   31648 type.go:168] "Request Body" body=""
	I1003 18:09:03.685140   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:03.685473   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:03.685548   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:04.185138   31648 type.go:168] "Request Body" body=""
	I1003 18:09:04.185208   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:04.185511   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:04.685397   31648 type.go:168] "Request Body" body=""
	I1003 18:09:04.685498   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:04.685815   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:05.185368   31648 type.go:168] "Request Body" body=""
	I1003 18:09:05.185430   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:05.185752   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:05.685306   31648 type.go:168] "Request Body" body=""
	I1003 18:09:05.685399   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:05.685722   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:05.685773   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-889240 requests repeat every ~500ms from 18:09:06.185 to 18:09:13.186, each answered with "connect: connection refused"; the node_ready.go:55 "will retry" warning recurs about every 2.5s ...]
	I1003 18:09:13.637912   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:09:13.685539   31648 type.go:168] "Request Body" body=""
	I1003 18:09:13.685624   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:13.685989   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:13.686249   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:09:13.688536   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:09:13.688567   31648 retry.go:31] will retry after 16.395640375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
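The failed apply is handed to minikube's retry helper (the retry.go:31 line above), which reruns the same command after a delay. Here is a rough Go sketch of that pattern around the same kubectl invocation; the attempt count and doubling backoff are illustrative assumptions, not the randomized values retry.go actually computes, and the log runs the command over SSH with sudo and an explicit KUBECONFIG:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // applyWithRetry reruns `kubectl apply` until it succeeds or attempts
    // run out, roughly what the addons retry loop above is doing.
    func applyWithRetry(manifest string, attempts int, delay time.Duration) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		out, e := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
    		if e == nil {
    			return nil
    		}
    		err = fmt.Errorf("attempt %d: %v: %s", i+1, e, out)
    		fmt.Println(err)
    		time.Sleep(delay) // the real helper sleeps a randomized, growing delay
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	// Manifest path is the one from the log above.
    	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5, 2*time.Second); err != nil {
    		fmt.Println("giving up:", err)
    	}
    }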
	[... the ~500ms GET polling of /api/v1/nodes/functional-889240 continues unchanged from 18:09:14.185 through 18:09:24.185, every attempt refused, with the periodic node_ready.go:55 retry warning ...]
	I1003 18:09:24.449821   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:09:24.497529   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:09:24.499857   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:09:24.499886   31648 retry.go:31] will retry after 48.383287224s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continues, still refused, from 18:09:24.685 through 18:09:29.685 ...]
	I1003 18:09:30.085101   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:09:30.133826   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:09:30.136048   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:09:30.136077   31648 retry.go:31] will retry after 44.319890963s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
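For completeness, the check that keeps failing between these applies is the node's Ready condition. Below is a hedged client-go sketch of roughly what node_ready.go is polling for; the node name comes from this log, while the kubeconfig path is an assumption for the sketch (the log's /var/lib/minikube/kubeconfig lives inside the minikube container, not on the host):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node has condition Ready=True,
    // the same check the retried polls above are trying to perform.
    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err // "connection refused" lands here while the apiserver is down
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	// Kubeconfig path is an assumption; the node name is from the log.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	for {
    		ready, err := nodeReady(cs, "functional-889240")
    		if err != nil {
    			fmt.Println("will retry:", err)
    		} else if ready {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }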
	I1003 18:09:30.185379   31648 type.go:168] "Request Body" body=""
	I1003 18:09:30.185467   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:30.185752   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:30.185824   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:30.685605   31648 type.go:168] "Request Body" body=""
	I1003 18:09:30.685677   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:30.686026   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:31.185741   31648 type.go:168] "Request Body" body=""
	I1003 18:09:31.185821   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:31.186131   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:31.685990   31648 type.go:168] "Request Body" body=""
	I1003 18:09:31.686102   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:31.686418   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:32.185174   31648 type.go:168] "Request Body" body=""
	I1003 18:09:32.185268   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:32.185574   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:32.685346   31648 type.go:168] "Request Body" body=""
	I1003 18:09:32.685414   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:32.685749   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:32.685798   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:33.185523   31648 type.go:168] "Request Body" body=""
	I1003 18:09:33.185630   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:33.185973   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:33.685847   31648 type.go:168] "Request Body" body=""
	I1003 18:09:33.685917   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:33.686290   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:34.185044   31648 type.go:168] "Request Body" body=""
	I1003 18:09:34.185158   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:34.185479   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:34.685329   31648 type.go:168] "Request Body" body=""
	I1003 18:09:34.685395   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:34.685778   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:34.685850   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:35.185617   31648 type.go:168] "Request Body" body=""
	I1003 18:09:35.185711   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:35.186046   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:35.685845   31648 type.go:168] "Request Body" body=""
	I1003 18:09:35.685931   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:35.686261   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:36.184952   31648 type.go:168] "Request Body" body=""
	I1003 18:09:36.185036   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:36.185378   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:36.685083   31648 type.go:168] "Request Body" body=""
	I1003 18:09:36.685158   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:36.685526   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:37.185252   31648 type.go:168] "Request Body" body=""
	I1003 18:09:37.185333   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:37.185680   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:37.185740   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:37.685420   31648 type.go:168] "Request Body" body=""
	I1003 18:09:37.685494   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:37.685856   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:38.185680   31648 type.go:168] "Request Body" body=""
	I1003 18:09:38.185779   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:38.186105   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:38.685935   31648 type.go:168] "Request Body" body=""
	I1003 18:09:38.686035   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:38.686351   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:39.185118   31648 type.go:168] "Request Body" body=""
	I1003 18:09:39.185189   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:39.185487   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:39.685188   31648 type.go:168] "Request Body" body=""
	I1003 18:09:39.685265   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:39.685570   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:39.685631   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:40.185362   31648 type.go:168] "Request Body" body=""
	I1003 18:09:40.185457   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:40.185802   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:40.685609   31648 type.go:168] "Request Body" body=""
	I1003 18:09:40.685713   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:40.686101   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:41.186030   31648 type.go:168] "Request Body" body=""
	I1003 18:09:41.186101   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:41.186433   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:41.685075   31648 type.go:168] "Request Body" body=""
	I1003 18:09:41.685142   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:41.685469   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:42.185193   31648 type.go:168] "Request Body" body=""
	I1003 18:09:42.185257   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:42.185565   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:42.185630   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:42.685077   31648 type.go:168] "Request Body" body=""
	I1003 18:09:42.685172   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:42.685483   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:43.185219   31648 type.go:168] "Request Body" body=""
	I1003 18:09:43.185289   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:43.185605   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:43.685108   31648 type.go:168] "Request Body" body=""
	I1003 18:09:43.685175   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:43.685496   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:44.185214   31648 type.go:168] "Request Body" body=""
	I1003 18:09:44.185314   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:44.185626   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:44.185696   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:44.685443   31648 type.go:168] "Request Body" body=""
	I1003 18:09:44.685535   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:44.685860   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:45.185669   31648 type.go:168] "Request Body" body=""
	I1003 18:09:45.185734   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:45.186050   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:45.685869   31648 type.go:168] "Request Body" body=""
	I1003 18:09:45.685940   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:45.686258   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:46.184960   31648 type.go:168] "Request Body" body=""
	I1003 18:09:46.185084   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:46.185423   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:46.685149   31648 type.go:168] "Request Body" body=""
	I1003 18:09:46.685219   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:46.685543   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:46.685599   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:47.185302   31648 type.go:168] "Request Body" body=""
	I1003 18:09:47.185370   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:47.185710   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:47.685432   31648 type.go:168] "Request Body" body=""
	I1003 18:09:47.685496   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:47.685808   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:48.185599   31648 type.go:168] "Request Body" body=""
	I1003 18:09:48.185663   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:48.186043   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:48.685839   31648 type.go:168] "Request Body" body=""
	I1003 18:09:48.685931   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:48.686255   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:48.686305   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:49.185022   31648 type.go:168] "Request Body" body=""
	I1003 18:09:49.185091   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:49.185409   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:49.685097   31648 type.go:168] "Request Body" body=""
	I1003 18:09:49.685189   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:49.685510   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:50.185245   31648 type.go:168] "Request Body" body=""
	I1003 18:09:50.185317   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:50.185675   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:50.685396   31648 type.go:168] "Request Body" body=""
	I1003 18:09:50.685460   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:50.685814   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:51.185668   31648 type.go:168] "Request Body" body=""
	I1003 18:09:51.185757   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:51.186064   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:51.186116   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:51.685866   31648 type.go:168] "Request Body" body=""
	I1003 18:09:51.685934   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:51.686277   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:52.185003   31648 type.go:168] "Request Body" body=""
	I1003 18:09:52.185067   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:52.185368   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:52.685121   31648 type.go:168] "Request Body" body=""
	I1003 18:09:52.685219   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:52.685573   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:53.185280   31648 type.go:168] "Request Body" body=""
	I1003 18:09:53.185339   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:53.185633   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:53.685331   31648 type.go:168] "Request Body" body=""
	I1003 18:09:53.685395   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:53.685759   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:53.685836   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:54.185620   31648 type.go:168] "Request Body" body=""
	I1003 18:09:54.185691   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:54.186007   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:54.685714   31648 type.go:168] "Request Body" body=""
	I1003 18:09:54.685778   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:54.686135   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:55.185951   31648 type.go:168] "Request Body" body=""
	I1003 18:09:55.186058   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:55.186387   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:55.685101   31648 type.go:168] "Request Body" body=""
	I1003 18:09:55.685193   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:55.685564   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:56.185405   31648 type.go:168] "Request Body" body=""
	I1003 18:09:56.185491   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:56.185823   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:56.185874   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:56.685614   31648 type.go:168] "Request Body" body=""
	I1003 18:09:56.685702   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:56.686026   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:57.185904   31648 type.go:168] "Request Body" body=""
	I1003 18:09:57.186000   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:57.186336   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:57.685087   31648 type.go:168] "Request Body" body=""
	I1003 18:09:57.685160   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:57.685447   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:58.185160   31648 type.go:168] "Request Body" body=""
	I1003 18:09:58.185246   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:58.185558   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:58.685303   31648 type.go:168] "Request Body" body=""
	I1003 18:09:58.685365   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:58.685671   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:58.685755   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical poll cycles elided (18:09:59.185-18:10:01.185, all connection refused) ...]
	W1003 18:10:01.185830   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical poll cycles elided (18:10:01.685-18:10:03.685, all connection refused) ...]
	W1003 18:10:03.685696   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 4 identical poll cycles elided (18:10:04.185-18:10:05.685, all connection refused) ...]
	W1003 18:10:05.686513   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical poll cycles elided (18:10:06.185-18:10:08.185, all connection refused) ...]
	W1003 18:10:08.185516   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 4 identical poll cycles elided (18:10:08.685-18:10:10.185, all connection refused) ...]
	W1003 18:10:10.186087   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical poll cycles elided (18:10:10.685-18:10:12.685, all connection refused) ...]
	W1003 18:10:12.685875   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:12.884184   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:10:12.932382   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:10:12.934859   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:10:12.935018   31648 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
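	Both the validation error and the retry are expected here: kubectl's client-side validation has to fetch the schema from the apiserver's /openapi/v2 endpoint, so validation itself fails while the server is down, which is why kubectl suggests --validate=false as the escape hatch. minikube's addons.go instead re-runs the apply. A rough sketch of such a retry wrapper around the logged kubectl command; the helper name, attempt count, and fixed delay are hypothetical:

	// apply.go - illustrative sketch of "apply failed, will retry".
	package addons

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry shells out to kubectl apply (as the log above does via
	// ssh_runner) and retries while the apiserver is still unreachable.
	func applyWithRetry(kubectl, kubeconfig, manifest string, attempts int) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig,
				kubectl, "apply", "--force", "-f", manifest)
			if out, err := cmd.CombinedOutput(); err != nil {
				lastErr = fmt.Errorf("apply failed, will retry: %v\n%s", err, out)
				time.Sleep(2 * time.Second) // hypothetical backoff
				continue
			}
			return nil
		}
		return lastErr
	}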
	[... 3 identical poll cycles elided (18:10:13.185-18:10:14.185, all connection refused) ...]
	I1003 18:10:14.456560   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:10:14.507486   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:10:14.509939   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:10:14.510064   31648 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1003 18:10:14.512677   31648 out.go:179] * Enabled addons: 
	I1003 18:10:14.514281   31648 addons.go:514] duration metric: took 1m59.941954445s for enable addons: enabled=[]
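	Note the empty enabled=[] list: both addon applies above failed, so after 1m59.9s the addons phase finishes with nothing enabled while the readiness poll below keeps failing. The root cause is the same in every line: nothing is accepting connections on port 8441. A minimal probe of the apiserver's standard /readyz endpoint that would surface this directly; the insecure TLS config is illustrative shorthand (real code should trust the cluster CA):

	// probe.go - illustrative apiserver health probe.
	package probe

	import (
		"crypto/tls"
		"net/http"
		"time"
	)

	// apiserverReady reports whether kube-apiserver answers /readyz with 200.
	// A "connection refused" dial error, as in the log, simply returns false.
	func apiserverReady(host string) bool {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://" + host + "/readyz")
		if err != nil {
			return false
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK
	}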
	[... 1 identical poll cycle elided (18:10:14.685, connection refused) ...]
	W1003 18:10:14.685919   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical poll cycles elided (18:10:15.185-18:10:17.185, all connection refused) ...]
	W1003 18:10:17.185563   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 4 identical poll cycles elided (18:10:17.685-18:10:19.185, all connection refused) ...]
	W1003 18:10:19.186371   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical poll cycles elided (18:10:19.685-18:10:21.685, all connection refused) ...]
	W1003 18:10:21.686273   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical poll cycles elided (18:10:22.185-18:10:24.185, all connection refused) ...]
	W1003 18:10:24.185890   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical poll cycles elided (18:10:24.685-18:10:26.685, all connection refused) ...]
	W1003 18:10:26.685908   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical poll cycles elided (18:10:27.185-18:10:29.185, all connection refused) ...]
	W1003 18:10:29.185717   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical poll cycles elided (18:10:29.685-18:10:31.685, all connection refused) ...]
	W1003 18:10:31.685553   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 4 identical poll cycles elided (18:10:32.185-18:10:33.685, all connection refused) ...]
	W1003 18:10:33.686226   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical poll cycles elided (18:10:34.186-18:10:36.185, all connection refused) ...]
	W1003 18:10:36.186025   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical poll cycles elided (18:10:36.685-18:10:38.685, all connection refused) ...]
	W1003 18:10:38.685800   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical poll cycles elided (18:10:39.185-18:10:41.185, all connection refused) ...]
	W1003 18:10:41.185700   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical poll cycles elided (18:10:41.685-18:10:43.685, all connection refused) ...]
	W1003 18:10:43.685542   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 4 identical poll cycles elided (18:10:44.185-18:10:45.685, all connection refused) ...]
	W1003 18:10:45.686349   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical poll cycles elided (18:10:46.185-18:10:48.185, all connection refused) ...]
	W1003 18:10:48.186105   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical poll cycles elided (18:10:48.685-18:10:50.685, all connection refused) ...]
	W1003 18:10:50.686209   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical poll cycles elided (18:10:51.185-18:10:53.185, all connection refused) ...]
	W1003 18:10:53.186109   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:53.685850   31648 type.go:168] "Request Body" body=""
	I1003 18:10:53.685914   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:53.686255   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:54.185017   31648 type.go:168] "Request Body" body=""
	I1003 18:10:54.185080   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:54.185397   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:54.685078   31648 type.go:168] "Request Body" body=""
	I1003 18:10:54.685145   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:54.685459   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:55.185159   31648 type.go:168] "Request Body" body=""
	I1003 18:10:55.185227   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:55.185528   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:55.685211   31648 type.go:168] "Request Body" body=""
	I1003 18:10:55.685279   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:55.685586   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:55.685652   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:56.185352   31648 type.go:168] "Request Body" body=""
	I1003 18:10:56.185430   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:56.185759   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:56.685531   31648 type.go:168] "Request Body" body=""
	I1003 18:10:56.685600   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:56.685922   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:57.185723   31648 type.go:168] "Request Body" body=""
	I1003 18:10:57.185811   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:57.186156   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:57.685922   31648 type.go:168] "Request Body" body=""
	I1003 18:10:57.686010   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:57.686316   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:57.686367   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:58.185097   31648 type.go:168] "Request Body" body=""
	I1003 18:10:58.185187   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:58.185535   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:58.685089   31648 type.go:168] "Request Body" body=""
	I1003 18:10:58.685158   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:58.685458   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:59.185180   31648 type.go:168] "Request Body" body=""
	I1003 18:10:59.185260   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:59.185605   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:59.685329   31648 type.go:168] "Request Body" body=""
	I1003 18:10:59.685409   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:59.685768   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:00.185577   31648 type.go:168] "Request Body" body=""
	I1003 18:11:00.185644   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:00.185968   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:00.186053   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:00.685767   31648 type.go:168] "Request Body" body=""
	I1003 18:11:00.685853   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:00.686208   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:01.185912   31648 type.go:168] "Request Body" body=""
	I1003 18:11:01.186001   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:01.186311   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:01.685060   31648 type.go:168] "Request Body" body=""
	I1003 18:11:01.685173   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:01.685511   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:02.185272   31648 type.go:168] "Request Body" body=""
	I1003 18:11:02.185343   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:02.185674   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:02.685366   31648 type.go:168] "Request Body" body=""
	I1003 18:11:02.685447   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:02.685807   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:02.685860   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:03.185586   31648 type.go:168] "Request Body" body=""
	I1003 18:11:03.185653   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:03.186010   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:03.685810   31648 type.go:168] "Request Body" body=""
	I1003 18:11:03.685892   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:03.686241   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:04.184939   31648 type.go:168] "Request Body" body=""
	I1003 18:11:04.185023   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:04.185312   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:04.685060   31648 type.go:168] "Request Body" body=""
	I1003 18:11:04.685143   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:04.685467   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:05.185189   31648 type.go:168] "Request Body" body=""
	I1003 18:11:05.185258   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:05.185567   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:05.185625   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:05.685299   31648 type.go:168] "Request Body" body=""
	I1003 18:11:05.685378   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:05.685703   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:06.185511   31648 type.go:168] "Request Body" body=""
	I1003 18:11:06.185600   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:06.185915   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:06.685750   31648 type.go:168] "Request Body" body=""
	I1003 18:11:06.685834   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:06.686186   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:07.185989   31648 type.go:168] "Request Body" body=""
	I1003 18:11:07.186058   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:07.186369   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:07.186436   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:07.685126   31648 type.go:168] "Request Body" body=""
	I1003 18:11:07.685203   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:07.685514   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:08.185223   31648 type.go:168] "Request Body" body=""
	I1003 18:11:08.185315   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:08.185627   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:08.685356   31648 type.go:168] "Request Body" body=""
	I1003 18:11:08.685469   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:08.685819   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:09.185588   31648 type.go:168] "Request Body" body=""
	I1003 18:11:09.185655   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:09.186048   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:09.685858   31648 type.go:168] "Request Body" body=""
	I1003 18:11:09.685945   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:09.686291   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:09.686344   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:10.185028   31648 type.go:168] "Request Body" body=""
	I1003 18:11:10.185112   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:10.185419   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:10.685125   31648 type.go:168] "Request Body" body=""
	I1003 18:11:10.685235   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:10.685580   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:11.185333   31648 type.go:168] "Request Body" body=""
	I1003 18:11:11.185400   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:11.185721   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:11.685427   31648 type.go:168] "Request Body" body=""
	I1003 18:11:11.685540   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:11.685876   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:12.185659   31648 type.go:168] "Request Body" body=""
	I1003 18:11:12.185756   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:12.186078   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:12.186142   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:12.685887   31648 type.go:168] "Request Body" body=""
	I1003 18:11:12.685959   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:12.686282   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:13.185003   31648 type.go:168] "Request Body" body=""
	I1003 18:11:13.185081   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:13.185409   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:13.685094   31648 type.go:168] "Request Body" body=""
	I1003 18:11:13.685164   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:13.685478   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:14.185184   31648 type.go:168] "Request Body" body=""
	I1003 18:11:14.185260   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:14.185598   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:14.685408   31648 type.go:168] "Request Body" body=""
	I1003 18:11:14.685477   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:14.685794   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:14.685865   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:15.185614   31648 type.go:168] "Request Body" body=""
	I1003 18:11:15.185690   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:15.186097   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:15.685915   31648 type.go:168] "Request Body" body=""
	I1003 18:11:15.686020   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:15.686331   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:16.185164   31648 type.go:168] "Request Body" body=""
	I1003 18:11:16.185233   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:16.185540   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:16.685230   31648 type.go:168] "Request Body" body=""
	I1003 18:11:16.685290   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:16.685601   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:17.185312   31648 type.go:168] "Request Body" body=""
	I1003 18:11:17.185380   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:17.185697   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:17.185779   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:17.685436   31648 type.go:168] "Request Body" body=""
	I1003 18:11:17.685502   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:17.685845   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:18.185654   31648 type.go:168] "Request Body" body=""
	I1003 18:11:18.185717   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:18.186072   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:18.685861   31648 type.go:168] "Request Body" body=""
	I1003 18:11:18.685924   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:18.686240   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:19.185000   31648 type.go:168] "Request Body" body=""
	I1003 18:11:19.185076   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:19.185392   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:19.685130   31648 type.go:168] "Request Body" body=""
	I1003 18:11:19.685199   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:19.685540   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:19.685603   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:20.185304   31648 type.go:168] "Request Body" body=""
	I1003 18:11:20.185368   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:20.185692   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:20.685437   31648 type.go:168] "Request Body" body=""
	I1003 18:11:20.685512   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:20.685889   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:21.185654   31648 type.go:168] "Request Body" body=""
	I1003 18:11:21.185736   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:21.186088   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:21.685864   31648 type.go:168] "Request Body" body=""
	I1003 18:11:21.685950   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:21.686257   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:21.686310   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:22.185029   31648 type.go:168] "Request Body" body=""
	I1003 18:11:22.185128   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:22.185448   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:22.685177   31648 type.go:168] "Request Body" body=""
	I1003 18:11:22.685257   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:22.685561   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:23.185277   31648 type.go:168] "Request Body" body=""
	I1003 18:11:23.185353   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:23.185666   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:23.685362   31648 type.go:168] "Request Body" body=""
	I1003 18:11:23.685435   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:23.685751   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:24.185475   31648 type.go:168] "Request Body" body=""
	I1003 18:11:24.185552   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:24.185910   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:24.185963   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:24.685584   31648 type.go:168] "Request Body" body=""
	I1003 18:11:24.685659   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:24.685971   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:25.185758   31648 type.go:168] "Request Body" body=""
	I1003 18:11:25.185842   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:25.186204   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:25.685956   31648 type.go:168] "Request Body" body=""
	I1003 18:11:25.686040   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:25.686348   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:26.185071   31648 type.go:168] "Request Body" body=""
	I1003 18:11:26.185144   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:26.185483   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:26.685189   31648 type.go:168] "Request Body" body=""
	I1003 18:11:26.685255   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:26.685555   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:26.685624   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:27.185293   31648 type.go:168] "Request Body" body=""
	I1003 18:11:27.185364   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:27.185670   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:27.685353   31648 type.go:168] "Request Body" body=""
	I1003 18:11:27.685417   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:27.685713   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:28.185462   31648 type.go:168] "Request Body" body=""
	I1003 18:11:28.185529   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:28.185838   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:28.685636   31648 type.go:168] "Request Body" body=""
	I1003 18:11:28.685711   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:28.686033   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:28.686095   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:29.185891   31648 type.go:168] "Request Body" body=""
	I1003 18:11:29.185959   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:29.186289   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:29.684999   31648 type.go:168] "Request Body" body=""
	I1003 18:11:29.685063   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:29.685358   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:30.185079   31648 type.go:168] "Request Body" body=""
	I1003 18:11:30.185147   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:30.185448   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:30.685153   31648 type.go:168] "Request Body" body=""
	I1003 18:11:30.685224   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:30.685542   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:31.185387   31648 type.go:168] "Request Body" body=""
	I1003 18:11:31.185470   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:31.185801   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:31.185869   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:31.685601   31648 type.go:168] "Request Body" body=""
	I1003 18:11:31.685665   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:31.686013   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:32.185823   31648 type.go:168] "Request Body" body=""
	I1003 18:11:32.185918   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:32.186314   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:32.685025   31648 type.go:168] "Request Body" body=""
	I1003 18:11:32.685090   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:32.685396   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:33.185093   31648 type.go:168] "Request Body" body=""
	I1003 18:11:33.185177   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:33.185492   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:33.685174   31648 type.go:168] "Request Body" body=""
	I1003 18:11:33.685294   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:33.685598   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:33.685653   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:34.185347   31648 type.go:168] "Request Body" body=""
	I1003 18:11:34.185424   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:34.185757   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:34.685584   31648 type.go:168] "Request Body" body=""
	I1003 18:11:34.685700   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:34.686040   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:35.185805   31648 type.go:168] "Request Body" body=""
	I1003 18:11:35.185867   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:35.186199   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:35.685954   31648 type.go:168] "Request Body" body=""
	I1003 18:11:35.686050   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:35.686359   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:35.686411   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:36.185172   31648 type.go:168] "Request Body" body=""
	I1003 18:11:36.185238   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:36.185535   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:36.685215   31648 type.go:168] "Request Body" body=""
	I1003 18:11:36.685302   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:36.685612   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:37.185339   31648 type.go:168] "Request Body" body=""
	I1003 18:11:37.185403   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:37.185728   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:37.685401   31648 type.go:168] "Request Body" body=""
	I1003 18:11:37.685477   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:37.685800   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:38.185642   31648 type.go:168] "Request Body" body=""
	I1003 18:11:38.185720   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:38.186056   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:38.186115   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:38.685846   31648 type.go:168] "Request Body" body=""
	I1003 18:11:38.685908   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:38.686230   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:39.184965   31648 type.go:168] "Request Body" body=""
	I1003 18:11:39.185068   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:39.185389   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:39.685076   31648 type.go:168] "Request Body" body=""
	I1003 18:11:39.685138   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:39.685429   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:40.185151   31648 type.go:168] "Request Body" body=""
	I1003 18:11:40.185227   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:40.185552   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:40.685234   31648 type.go:168] "Request Body" body=""
	I1003 18:11:40.685299   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:40.685612   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:40.685679   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:41.185407   31648 type.go:168] "Request Body" body=""
	I1003 18:11:41.185475   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:41.185810   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:41.685588   31648 type.go:168] "Request Body" body=""
	I1003 18:11:41.685663   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:41.685999   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:42.185821   31648 type.go:168] "Request Body" body=""
	I1003 18:11:42.185909   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:42.186287   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:42.685035   31648 type.go:168] "Request Body" body=""
	I1003 18:11:42.685109   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:42.685460   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:43.185163   31648 type.go:168] "Request Body" body=""
	I1003 18:11:43.185226   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:43.185569   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:43.185640   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:43.685320   31648 type.go:168] "Request Body" body=""
	I1003 18:11:43.685387   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:43.685687   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:44.185376   31648 type.go:168] "Request Body" body=""
	I1003 18:11:44.185445   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:44.185795   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:44.685599   31648 type.go:168] "Request Body" body=""
	I1003 18:11:44.685672   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:44.686013   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:45.185797   31648 type.go:168] "Request Body" body=""
	I1003 18:11:45.185863   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:45.186210   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:45.186272   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:45.684943   31648 type.go:168] "Request Body" body=""
	I1003 18:11:45.685023   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:45.685323   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:46.184972   31648 type.go:168] "Request Body" body=""
	I1003 18:11:46.185063   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:46.185368   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:46.685078   31648 type.go:168] "Request Body" body=""
	I1003 18:11:46.685143   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:46.685436   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:47.185171   31648 type.go:168] "Request Body" body=""
	I1003 18:11:47.185237   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:47.185530   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:47.685229   31648 type.go:168] "Request Body" body=""
	I1003 18:11:47.685292   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:47.685573   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:47.685625   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:48.185308   31648 type.go:168] "Request Body" body=""
	I1003 18:11:48.185378   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:48.185726   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:48.685435   31648 type.go:168] "Request Body" body=""
	I1003 18:11:48.685502   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:48.685818   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:49.185572   31648 type.go:168] "Request Body" body=""
	I1003 18:11:49.185639   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:49.185951   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:49.685755   31648 type.go:168] "Request Body" body=""
	I1003 18:11:49.685820   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:49.686165   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:49.686226   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:50.185972   31648 type.go:168] "Request Body" body=""
	I1003 18:11:50.186049   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:50.186347   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:50.685077   31648 type.go:168] "Request Body" body=""
	I1003 18:11:50.685149   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:50.685487   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:51.185355   31648 type.go:168] "Request Body" body=""
	I1003 18:11:51.185423   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:51.185749   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:51.685438   31648 type.go:168] "Request Body" body=""
	I1003 18:11:51.685502   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:51.685808   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:52.185581   31648 type.go:168] "Request Body" body=""
	I1003 18:11:52.185644   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:52.185967   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:52.186043   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:52.685763   31648 type.go:168] "Request Body" body=""
	I1003 18:11:52.685866   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:52.686218   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:53.184953   31648 type.go:168] "Request Body" body=""
	I1003 18:11:53.185051   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:53.185365   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:53.685069   31648 type.go:168] "Request Body" body=""
	I1003 18:11:53.685143   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:53.685457   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:54.185161   31648 type.go:168] "Request Body" body=""
	I1003 18:11:54.185226   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:54.185562   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:54.685310   31648 type.go:168] "Request Body" body=""
	I1003 18:11:54.685387   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:54.685726   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:54.685776   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:55.185417   31648 type.go:168] "Request Body" body=""
	I1003 18:11:55.185483   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:55.185815   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:55.685573   31648 type.go:168] "Request Body" body=""
	I1003 18:11:55.685677   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:55.686027   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:56.185731   31648 type.go:168] "Request Body" body=""
	I1003 18:11:56.185792   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:56.186116   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:56.685906   31648 type.go:168] "Request Body" body=""
	I1003 18:11:56.686004   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:56.686321   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:56.686379   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:57.185067   31648 type.go:168] "Request Body" body=""
	I1003 18:11:57.185134   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:57.185426   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:57.685144   31648 type.go:168] "Request Body" body=""
	I1003 18:11:57.685226   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:57.685539   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:58.185226   31648 type.go:168] "Request Body" body=""
	I1003 18:11:58.185291   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:58.185597   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:58.685288   31648 type.go:168] "Request Body" body=""
	I1003 18:11:58.685373   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:58.685689   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:59.185369   31648 type.go:168] "Request Body" body=""
	I1003 18:11:59.185441   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:59.185768   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:59.185831   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:59.685575   31648 type.go:168] "Request Body" body=""
	I1003 18:11:59.685674   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:59.686024   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:00.185851   31648 type.go:168] "Request Body" body=""
	I1003 18:12:00.185922   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:00.186234   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:00.684953   31648 type.go:168] "Request Body" body=""
	I1003 18:12:00.685062   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:00.685403   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:01.185179   31648 type.go:168] "Request Body" body=""
	I1003 18:12:01.185248   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:01.185572   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:01.685293   31648 type.go:168] "Request Body" body=""
	I1003 18:12:01.685376   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:01.685710   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:01.685766   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:02.185411   31648 type.go:168] "Request Body" body=""
	I1003 18:12:02.185478   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:02.185826   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:02.685596   31648 type.go:168] "Request Body" body=""
	I1003 18:12:02.685688   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:02.686031   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:03.185821   31648 type.go:168] "Request Body" body=""
	I1003 18:12:03.185887   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:03.186235   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:03.684941   31648 type.go:168] "Request Body" body=""
	I1003 18:12:03.685043   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:03.685366   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:04.185065   31648 type.go:168] "Request Body" body=""
	I1003 18:12:04.185133   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:04.185448   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:04.185500   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:04.685256   31648 type.go:168] "Request Body" body=""
	I1003 18:12:04.685332   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:04.685650   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:05.185329   31648 type.go:168] "Request Body" body=""
	I1003 18:12:05.185398   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:05.185718   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:05.685410   31648 type.go:168] "Request Body" body=""
	I1003 18:12:05.685475   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:05.685794   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:06.185563   31648 type.go:168] "Request Body" body=""
	I1003 18:12:06.185632   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:06.185948   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:06.186035   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:06.685752   31648 type.go:168] "Request Body" body=""
	I1003 18:12:06.685824   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:06.686177   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:07.185942   31648 type.go:168] "Request Body" body=""
	I1003 18:12:07.186020   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:07.186318   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:07.685031   31648 type.go:168] "Request Body" body=""
	I1003 18:12:07.685100   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:07.685424   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:08.185310   31648 type.go:168] "Request Body" body=""
	I1003 18:12:08.185557   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:08.186174   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:08.186246   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:08.685021   31648 type.go:168] "Request Body" body=""
	I1003 18:12:08.685163   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:08.685624   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:09.185153   31648 type.go:168] "Request Body" body=""
	I1003 18:12:09.185228   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:09.185529   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:09.685080   31648 type.go:168] "Request Body" body=""
	I1003 18:12:09.685150   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:09.685445   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:10.185696   31648 type.go:168] "Request Body" body=""
	I1003 18:12:10.185761   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:10.186171   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:10.685822   31648 type.go:168] "Request Body" body=""
	I1003 18:12:10.685891   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:10.686201   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:10.686266   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:11.184920   31648 type.go:168] "Request Body" body=""
	I1003 18:12:11.185025   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:11.185378   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:11.684920   31648 type.go:168] "Request Body" body=""
	I1003 18:12:11.685033   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:11.685353   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:12.186032   31648 type.go:168] "Request Body" body=""
	I1003 18:12:12.186096   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:12.186405   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:12.685015   31648 type.go:168] "Request Body" body=""
	I1003 18:12:12.685091   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:12.685409   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:13.185019   31648 type.go:168] "Request Body" body=""
	I1003 18:12:13.185093   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:13.185404   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:13.185456   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:13.685017   31648 type.go:168] "Request Body" body=""
	I1003 18:12:13.685098   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:13.685420   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:14.185003   31648 type.go:168] "Request Body" body=""
	I1003 18:12:14.185073   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:14.185375   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:14.685353   31648 type.go:168] "Request Body" body=""
	I1003 18:12:14.685425   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:14.685732   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:15.185329   31648 type.go:168] "Request Body" body=""
	I1003 18:12:15.185393   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:15.185699   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:15.185756   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:15.685287   31648 type.go:168] "Request Body" body=""
	I1003 18:12:15.685366   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:15.685696   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:16.185545   31648 type.go:168] "Request Body" body=""
	I1003 18:12:16.185614   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:16.185938   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:16.685555   31648 type.go:168] "Request Body" body=""
	I1003 18:12:16.685672   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:16.686031   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:17.185708   31648 type.go:168] "Request Body" body=""
	I1003 18:12:17.185775   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:17.186072   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:17.186122   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:17.685745   31648 type.go:168] "Request Body" body=""
	I1003 18:12:17.685826   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:17.686169   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:18.185895   31648 type.go:168] "Request Body" body=""
	I1003 18:12:18.185966   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:18.186347   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:18.685985   31648 type.go:168] "Request Body" body=""
	I1003 18:12:18.686065   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:18.686377   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:19.185028   31648 type.go:168] "Request Body" body=""
	I1003 18:12:19.185094   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:19.185404   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:19.684993   31648 type.go:168] "Request Body" body=""
	I1003 18:12:19.685067   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:19.685365   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:19.685419   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:20.184966   31648 type.go:168] "Request Body" body=""
	I1003 18:12:20.185059   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:20.185369   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:20.684968   31648 type.go:168] "Request Body" body=""
	I1003 18:12:20.685064   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:20.685377   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:21.185199   31648 type.go:168] "Request Body" body=""
	I1003 18:12:21.185268   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:21.185584   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:21.685182   31648 type.go:168] "Request Body" body=""
	I1003 18:12:21.685270   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:21.685589   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:21.685651   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:22.185158   31648 type.go:168] "Request Body" body=""
	I1003 18:12:22.185226   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:22.185552   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:22.685092   31648 type.go:168] "Request Body" body=""
	I1003 18:12:22.685168   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:22.685483   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:23.185069   31648 type.go:168] "Request Body" body=""
	I1003 18:12:23.185132   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:23.185442   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:23.685074   31648 type.go:168] "Request Body" body=""
	I1003 18:12:23.685147   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:23.685472   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:24.185079   31648 type.go:168] "Request Body" body=""
	I1003 18:12:24.185152   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:24.185468   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:24.185523   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:24.685267   31648 type.go:168] "Request Body" body=""
	I1003 18:12:24.685328   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:24.685633   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:25.185201   31648 type.go:168] "Request Body" body=""
	I1003 18:12:25.185267   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:25.185577   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:25.685147   31648 type.go:168] "Request Body" body=""
	I1003 18:12:25.685221   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:25.685537   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:26.185376   31648 type.go:168] "Request Body" body=""
	I1003 18:12:26.185445   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:26.185763   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:26.185815   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:26.685320   31648 type.go:168] "Request Body" body=""
	I1003 18:12:26.685398   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:26.685732   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:27.185386   31648 type.go:168] "Request Body" body=""
	I1003 18:12:27.185456   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:27.185774   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:27.685332   31648 type.go:168] "Request Body" body=""
	I1003 18:12:27.685409   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:27.685755   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:28.185323   31648 type.go:168] "Request Body" body=""
	I1003 18:12:28.185387   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:28.185709   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:28.685266   31648 type.go:168] "Request Body" body=""
	I1003 18:12:28.685343   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:28.685731   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:28.685797   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:29.185293   31648 type.go:168] "Request Body" body=""
	I1003 18:12:29.185362   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:29.185681   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:29.685253   31648 type.go:168] "Request Body" body=""
	I1003 18:12:29.685341   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:29.685670   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:30.185273   31648 type.go:168] "Request Body" body=""
	I1003 18:12:30.185336   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:30.185638   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:30.685205   31648 type.go:168] "Request Body" body=""
	I1003 18:12:30.685285   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:30.685586   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:31.185396   31648 type.go:168] "Request Body" body=""
	I1003 18:12:31.185471   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:31.185833   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:31.185890   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:31.685435   31648 type.go:168] "Request Body" body=""
	I1003 18:12:31.685517   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:31.685844   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:32.185392   31648 type.go:168] "Request Body" body=""
	I1003 18:12:32.185458   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:32.185764   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:32.685377   31648 type.go:168] "Request Body" body=""
	I1003 18:12:32.685464   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:32.685795   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:33.185359   31648 type.go:168] "Request Body" body=""
	I1003 18:12:33.185426   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:33.185740   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:33.685326   31648 type.go:168] "Request Body" body=""
	I1003 18:12:33.685407   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:33.685749   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:33.685805   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:34.185324   31648 type.go:168] "Request Body" body=""
	I1003 18:12:34.185391   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:34.185798   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:34.685697   31648 type.go:168] "Request Body" body=""
	I1003 18:12:34.685778   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:34.686147   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:35.185833   31648 type.go:168] "Request Body" body=""
	I1003 18:12:35.185908   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:35.186230   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:35.685876   31648 type.go:168] "Request Body" body=""
	I1003 18:12:35.685957   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:35.686342   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:35.686404   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:36.185025   31648 type.go:168] "Request Body" body=""
	I1003 18:12:36.185106   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:36.185455   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:36.685049   31648 type.go:168] "Request Body" body=""
	I1003 18:12:36.685129   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:36.685448   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:37.185016   31648 type.go:168] "Request Body" body=""
	I1003 18:12:37.185089   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:37.185408   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:37.685018   31648 type.go:168] "Request Body" body=""
	I1003 18:12:37.685090   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:37.685418   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:38.184968   31648 type.go:168] "Request Body" body=""
	I1003 18:12:38.185058   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:38.185369   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:38.185426   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:38.684922   31648 type.go:168] "Request Body" body=""
	I1003 18:12:38.685020   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:38.685336   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:39.186015   31648 type.go:168] "Request Body" body=""
	I1003 18:12:39.186082   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:39.186391   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:39.684964   31648 type.go:168] "Request Body" body=""
	I1003 18:12:39.685064   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:39.685384   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:40.185016   31648 type.go:168] "Request Body" body=""
	I1003 18:12:40.185081   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:40.185399   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:40.185451   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:40.685023   31648 type.go:168] "Request Body" body=""
	I1003 18:12:40.685100   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:40.685415   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:41.185286   31648 type.go:168] "Request Body" body=""
	I1003 18:12:41.185356   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:41.185704   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:41.685271   31648 type.go:168] "Request Body" body=""
	I1003 18:12:41.685345   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:41.685676   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:42.185232   31648 type.go:168] "Request Body" body=""
	I1003 18:12:42.185297   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:42.185603   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:42.185677   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:42.685166   31648 type.go:168] "Request Body" body=""
	I1003 18:12:42.685261   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:42.685582   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:43.185142   31648 type.go:168] "Request Body" body=""
	I1003 18:12:43.185210   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:43.185530   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:43.685335   31648 type.go:168] "Request Body" body=""
	I1003 18:12:43.685517   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:43.686011   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:44.185546   31648 type.go:168] "Request Body" body=""
	I1003 18:12:44.185637   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:44.185952   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:44.186027   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:44.685689   31648 type.go:168] "Request Body" body=""
	I1003 18:12:44.685790   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:44.686111   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:45.185834   31648 type.go:168] "Request Body" body=""
	I1003 18:12:45.185923   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:45.186247   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:45.685720   31648 type.go:168] "Request Body" body=""
	I1003 18:12:45.685788   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:45.686128   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:46.185754   31648 type.go:168] "Request Body" body=""
	I1003 18:12:46.185839   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:46.186221   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:46.186277   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:46.685820   31648 type.go:168] "Request Body" body=""
	I1003 18:12:46.685886   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:46.686208   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:47.185851   31648 type.go:168] "Request Body" body=""
	I1003 18:12:47.185923   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:47.186245   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:47.685882   31648 type.go:168] "Request Body" body=""
	I1003 18:12:47.685947   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:47.686262   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:48.185908   31648 type.go:168] "Request Body" body=""
	I1003 18:12:48.185999   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:48.186381   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:48.186430   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:48.686002   31648 type.go:168] "Request Body" body=""
	I1003 18:12:48.686088   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:48.686447   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:49.185029   31648 type.go:168] "Request Body" body=""
	I1003 18:12:49.185102   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:49.185407   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:49.685003   31648 type.go:168] "Request Body" body=""
	I1003 18:12:49.685079   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:49.685399   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:50.184995   31648 type.go:168] "Request Body" body=""
	I1003 18:12:50.185063   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:50.185376   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:50.685005   31648 type.go:168] "Request Body" body=""
	I1003 18:12:50.685086   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:50.685402   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:50.685457   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:51.185264   31648 type.go:168] "Request Body" body=""
	I1003 18:12:51.185331   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:51.185656   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:51.685186   31648 type.go:168] "Request Body" body=""
	I1003 18:12:51.685261   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:51.685581   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:52.185171   31648 type.go:168] "Request Body" body=""
	I1003 18:12:52.185246   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:52.185567   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:52.685150   31648 type.go:168] "Request Body" body=""
	I1003 18:12:52.685238   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:52.685565   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:52.685619   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:53.185114   31648 type.go:168] "Request Body" body=""
	I1003 18:12:53.185178   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:53.185492   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:53.685072   31648 type.go:168] "Request Body" body=""
	I1003 18:12:53.685148   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:53.685473   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:54.185075   31648 type.go:168] "Request Body" body=""
	I1003 18:12:54.185146   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:54.185455   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:54.685278   31648 type.go:168] "Request Body" body=""
	I1003 18:12:54.685361   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:54.685694   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:54.685749   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:55.185253   31648 type.go:168] "Request Body" body=""
	I1003 18:12:55.185324   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:55.185627   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:55.685205   31648 type.go:168] "Request Body" body=""
	I1003 18:12:55.685291   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:55.685628   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:56.185471   31648 type.go:168] "Request Body" body=""
	I1003 18:12:56.185542   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:56.185859   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:56.685418   31648 type.go:168] "Request Body" body=""
	I1003 18:12:56.685501   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:56.685842   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:56.685903   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:57.185408   31648 type.go:168] "Request Body" body=""
	I1003 18:12:57.185483   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:57.185825   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:57.685392   31648 type.go:168] "Request Body" body=""
	I1003 18:12:57.685471   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:57.685812   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:58.185364   31648 type.go:168] "Request Body" body=""
	I1003 18:12:58.185431   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:58.185736   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:58.685296   31648 type.go:168] "Request Body" body=""
	I1003 18:12:58.685379   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:58.685735   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:59.185312   31648 type.go:168] "Request Body" body=""
	I1003 18:12:59.185381   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:59.185710   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:59.185769   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
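What the loop is ultimately after is the node's Ready condition, which it can never read here because the request itself fails. For reference, a hedged client-go sketch of that condition check, under stated assumptions (the kubeconfig path is hypothetical, and the real node_ready.go wires its client differently):

	// ready_check_sketch.go - a typical Ready-condition lookup via client-go.
	package main

	import (
		"context"
		"fmt"
		"log"

		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path; minikube writes its own profile config.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "functional-889240", metav1.GetOptions{})
		if err != nil {
			// The failure mode this log repeats: the GET never succeeds.
			log.Fatalf("error getting node: %v", err)
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == v1.NodeReady {
				fmt.Printf("Ready=%s (%s)\n", cond.Status, cond.Reason)
			}
		}
	}
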
	I1003 18:12:59.685328   31648 type.go:168] "Request Body" body=""
	I1003 18:12:59.685404   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:59.685769   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:00.185320   31648 type.go:168] "Request Body" body=""
	I1003 18:13:00.185386   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:00.185713   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:00.685362   31648 type.go:168] "Request Body" body=""
	I1003 18:13:00.685457   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:00.685823   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:01.185697   31648 type.go:168] "Request Body" body=""
	I1003 18:13:01.185765   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:01.186114   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:01.186172   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:01.685762   31648 type.go:168] "Request Body" body=""
	I1003 18:13:01.685852   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:01.686240   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:02.185865   31648 type.go:168] "Request Body" body=""
	I1003 18:13:02.185951   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:02.186283   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:02.685917   31648 type.go:168] "Request Body" body=""
	I1003 18:13:02.686014   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:02.686332   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:03.185942   31648 type.go:168] "Request Body" body=""
	I1003 18:13:03.186032   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:03.186345   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:03.186397   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:03.684942   31648 type.go:168] "Request Body" body=""
	I1003 18:13:03.685055   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:03.685383   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:04.184939   31648 type.go:168] "Request Body" body=""
	I1003 18:13:04.185041   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:04.185351   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:04.685279   31648 type.go:168] "Request Body" body=""
	I1003 18:13:04.685358   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:04.685695   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:05.185233   31648 type.go:168] "Request Body" body=""
	I1003 18:13:05.185306   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:05.185608   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:05.685179   31648 type.go:168] "Request Body" body=""
	I1003 18:13:05.685255   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:05.685582   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:05.685657   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:06.185409   31648 type.go:168] "Request Body" body=""
	I1003 18:13:06.185478   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:06.185807   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:06.685397   31648 type.go:168] "Request Body" body=""
	I1003 18:13:06.685483   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:06.685824   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:07.185410   31648 type.go:168] "Request Body" body=""
	I1003 18:13:07.185478   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:07.185799   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:07.685361   31648 type.go:168] "Request Body" body=""
	I1003 18:13:07.685444   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:07.685776   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:07.685829   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:08.185354   31648 type.go:168] "Request Body" body=""
	I1003 18:13:08.185422   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:08.185738   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:08.685299   31648 type.go:168] "Request Body" body=""
	I1003 18:13:08.685380   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:08.685725   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:09.185279   31648 type.go:168] "Request Body" body=""
	I1003 18:13:09.185348   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:09.185678   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:09.685236   31648 type.go:168] "Request Body" body=""
	I1003 18:13:09.685312   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:09.685643   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:10.185169   31648 type.go:168] "Request Body" body=""
	I1003 18:13:10.185241   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:10.185552   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:10.185605   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:10.685136   31648 type.go:168] "Request Body" body=""
	I1003 18:13:10.685223   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:10.685575   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:11.185384   31648 type.go:168] "Request Body" body=""
	I1003 18:13:11.185459   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:11.185788   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:11.685352   31648 type.go:168] "Request Body" body=""
	I1003 18:13:11.685433   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:11.685753   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:12.185074   31648 type.go:168] "Request Body" body=""
	I1003 18:13:12.185141   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:12.185467   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:12.685018   31648 type.go:168] "Request Body" body=""
	I1003 18:13:12.685103   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:12.685412   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:12.685475   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:13.184997   31648 type.go:168] "Request Body" body=""
	I1003 18:13:13.185070   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:13.185403   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:13.684967   31648 type.go:168] "Request Body" body=""
	I1003 18:13:13.685061   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:13.685364   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:14.184923   31648 type.go:168] "Request Body" body=""
	I1003 18:13:14.185026   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:14.185364   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:14.685214   31648 type.go:168] "Request Body" body=""
	I1003 18:13:14.685280   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:14.685641   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:14.685714   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:15.185156   31648 type.go:168] "Request Body" body=""
	I1003 18:13:15.185255   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:15.185584   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:15.685142   31648 type.go:168] "Request Body" body=""
	I1003 18:13:15.685204   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:15.685537   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:16.185388   31648 type.go:168] "Request Body" body=""
	I1003 18:13:16.185470   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:16.185814   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:16.685411   31648 type.go:168] "Request Body" body=""
	I1003 18:13:16.685497   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:16.685863   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:16.685936   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:17.185442   31648 type.go:168] "Request Body" body=""
	I1003 18:13:17.185509   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:17.185829   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:17.685415   31648 type.go:168] "Request Body" body=""
	I1003 18:13:17.685525   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:17.685881   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:18.185495   31648 type.go:168] "Request Body" body=""
	I1003 18:13:18.185563   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:18.185876   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:18.685159   31648 type.go:168] "Request Body" body=""
	I1003 18:13:18.685230   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:18.685527   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:19.185084   31648 type.go:168] "Request Body" body=""
	I1003 18:13:19.185161   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:19.185450   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:19.185506   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:19.685103   31648 type.go:168] "Request Body" body=""
	I1003 18:13:19.685191   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:19.685616   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:20.185169   31648 type.go:168] "Request Body" body=""
	I1003 18:13:20.185250   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:20.185540   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:20.685137   31648 type.go:168] "Request Body" body=""
	I1003 18:13:20.685209   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:20.685542   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:21.185328   31648 type.go:168] "Request Body" body=""
	I1003 18:13:21.185409   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:21.185747   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:21.185800   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:21.685330   31648 type.go:168] "Request Body" body=""
	I1003 18:13:21.685393   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:21.685693   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:22.185267   31648 type.go:168] "Request Body" body=""
	I1003 18:13:22.185361   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:22.185713   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:22.685319   31648 type.go:168] "Request Body" body=""
	I1003 18:13:22.685385   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:22.685724   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:23.185388   31648 type.go:168] "Request Body" body=""
	I1003 18:13:23.185472   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:23.185812   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:23.185875   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:23.685447   31648 type.go:168] "Request Body" body=""
	I1003 18:13:23.685515   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:23.685833   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:24.185390   31648 type.go:168] "Request Body" body=""
	I1003 18:13:24.185457   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:24.185762   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:24.685669   31648 type.go:168] "Request Body" body=""
	I1003 18:13:24.685745   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:24.686090   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:25.185723   31648 type.go:168] "Request Body" body=""
	I1003 18:13:25.185792   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:25.186120   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:25.186180   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
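Every warning in this stretch bottoms out in the same dial error. In Go the chain *url.Error -> *net.OpError -> *os.SyscallError unwraps cleanly, so a refused dial can be told apart from timeouts or TLS failures; a small illustrative sketch (only the URL is taken from the log, everything else is an assumption):

	// refused_sketch.go - classifying the "connect: connection refused"
	// failures repeated throughout this log.
	package main

	import (
		"errors"
		"fmt"
		"net/http"
		"syscall"
	)

	func main() {
		_, err := http.Get("https://192.168.49.2:8441/api/v1/nodes/functional-889240")
		switch {
		case err == nil:
			fmt.Println("apiserver is up")
		case errors.Is(err, syscall.ECONNREFUSED):
			// The case this log repeats for over a minute: nothing is
			// listening on 192.168.49.2:8441, i.e. kube-apiserver is down
			// or still restarting inside the functional-889240 node.
			fmt.Println("connection refused: apiserver not listening yet")
		default:
			fmt.Printf("other failure (timeout, DNS, TLS): %v\n", err)
		}
	}

Refused dials like these are consistent with the other TestFunctional/serial failures in this report: any command that needs the apiserver fails until something owns port 8441 again.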
	I1003 18:13:25.685886   31648 type.go:168] "Request Body" body=""
	I1003 18:13:25.685961   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:25.686311   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:26.185007   31648 type.go:168] "Request Body" body=""
	I1003 18:13:26.185071   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:26.185380   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:26.684952   31648 type.go:168] "Request Body" body=""
	I1003 18:13:26.685041   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:26.685347   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:27.185970   31648 type.go:168] "Request Body" body=""
	I1003 18:13:27.186046   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:27.186356   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:27.186405   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:27.685041   31648 type.go:168] "Request Body" body=""
	I1003 18:13:27.685106   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:27.685416   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:28.185003   31648 type.go:168] "Request Body" body=""
	I1003 18:13:28.185070   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:28.185403   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:28.684968   31648 type.go:168] "Request Body" body=""
	I1003 18:13:28.685055   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:28.685378   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:29.184912   31648 type.go:168] "Request Body" body=""
	I1003 18:13:29.185004   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:29.185313   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:29.686012   31648 type.go:168] "Request Body" body=""
	I1003 18:13:29.686076   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:29.686383   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:29.686435   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:30.184929   31648 type.go:168] "Request Body" body=""
	I1003 18:13:30.185073   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:30.185387   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:30.684930   31648 type.go:168] "Request Body" body=""
	I1003 18:13:30.685049   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:30.685367   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:31.185212   31648 type.go:168] "Request Body" body=""
	I1003 18:13:31.185277   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:31.185571   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:31.685142   31648 type.go:168] "Request Body" body=""
	I1003 18:13:31.685208   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:31.685504   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:32.185085   31648 type.go:168] "Request Body" body=""
	I1003 18:13:32.185151   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:32.185469   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:32.185524   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:32.685051   31648 type.go:168] "Request Body" body=""
	I1003 18:13:32.685118   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:32.685424   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:33.185022   31648 type.go:168] "Request Body" body=""
	I1003 18:13:33.185092   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:33.185392   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:33.684962   31648 type.go:168] "Request Body" body=""
	I1003 18:13:33.685058   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:33.685365   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:34.184958   31648 type.go:168] "Request Body" body=""
	I1003 18:13:34.185041   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:34.185342   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:34.685149   31648 type.go:168] "Request Body" body=""
	I1003 18:13:34.685221   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:34.685506   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:34.685560   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:35.185096   31648 type.go:168] "Request Body" body=""
	I1003 18:13:35.185162   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:35.185507   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:36.685664   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-889240 poll repeats every ~500ms from 18:13:35 through 18:14:14, each request answered with "connect: connection refused"; the node_ready.go "will retry" warning above recurs roughly every 2s for the whole interval ...]
	I1003 18:14:14.685359   31648 type.go:168] "Request Body" body=""
	W1003 18:14:14.685420   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded
	I1003 18:14:14.685433   31648 node_ready.go:38] duration metric: took 6m0.000605507s for node "functional-889240" to be "Ready" ...
	I1003 18:14:14.688030   31648 out.go:203] 
	W1003 18:14:14.689379   31648 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1003 18:14:14.689402   31648 out.go:285] * 
	W1003 18:14:14.691089   31648 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:14:14.693118   31648 out.go:203] 
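
The six-minute stall above is a plain poll-until-deadline loop: minikube re-issues the same node GET every ~500ms, swallows the connection-refused errors, and only gives up when the overall context expires. A minimal sketch of that pattern using client-go and apimachinery's wait helpers is shown below; it is illustrative only (node name and kubeconfig path are taken from this log), not minikube's actual node_ready.go code.

// Sketch of the wait pattern above: poll the node's Ready condition every
// 500ms until a 6m deadline, treating transient errors as "retry".
// Illustrative only -- NOT minikube's node_ready.go implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, getErr := client.CoreV1().Nodes().Get(ctx, "functional-889240", metav1.GetOptions{})
			if getErr != nil {
				return false, nil // connection refused etc. => keep retrying, as in the log
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	// With the apiserver never coming up, this returns "context deadline
	// exceeded" after 6m -- the same failure surfaced above.
	fmt.Println("wait result:", err)
}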
	
	
	==> CRI-O <==
	Oct 03 18:14:07 functional-889240 crio[2966]: time="2025-10-03T18:14:07.239192107Z" level=info msg="createCtr: removing container 072e4e9460dee9219f80ca505d4733bd0064816e717efde90762b7a102c27e9b" id=559bae75-fd42-4125-8d24-ff6dd69f00d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:07 functional-889240 crio[2966]: time="2025-10-03T18:14:07.23922293Z" level=info msg="createCtr: deleting container 072e4e9460dee9219f80ca505d4733bd0064816e717efde90762b7a102c27e9b from storage" id=559bae75-fd42-4125-8d24-ff6dd69f00d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:07 functional-889240 crio[2966]: time="2025-10-03T18:14:07.241163158Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-889240_kube-system_7e715cb6024854d45a9fa99576167e43_0" id=559bae75-fd42-4125-8d24-ff6dd69f00d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:09 functional-889240 crio[2966]: time="2025-10-03T18:14:09.212175329Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=19d07a45-fb10-41b2-9b94-8181c241e176 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:09 functional-889240 crio[2966]: time="2025-10-03T18:14:09.212940413Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=aa8cac25-319d-432c-a31a-d9b5de82fe6d name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:09 functional-889240 crio[2966]: time="2025-10-03T18:14:09.213820105Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-889240/kube-apiserver" id=18936f50-4957-42d2-bfc0-50b88dc6ed55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:09 functional-889240 crio[2966]: time="2025-10-03T18:14:09.21411552Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:14:09 functional-889240 crio[2966]: time="2025-10-03T18:14:09.218873126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:14:09 functional-889240 crio[2966]: time="2025-10-03T18:14:09.219323296Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:14:09 functional-889240 crio[2966]: time="2025-10-03T18:14:09.234948332Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=18936f50-4957-42d2-bfc0-50b88dc6ed55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:09 functional-889240 crio[2966]: time="2025-10-03T18:14:09.23631674Z" level=info msg="createCtr: deleting container ID ed3ac05f1b6173e8965eba234b45cb1f88789049f41edd8a04d789a4ba7851fa from idIndex" id=18936f50-4957-42d2-bfc0-50b88dc6ed55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:09 functional-889240 crio[2966]: time="2025-10-03T18:14:09.236349339Z" level=info msg="createCtr: removing container ed3ac05f1b6173e8965eba234b45cb1f88789049f41edd8a04d789a4ba7851fa" id=18936f50-4957-42d2-bfc0-50b88dc6ed55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:09 functional-889240 crio[2966]: time="2025-10-03T18:14:09.236374758Z" level=info msg="createCtr: deleting container ed3ac05f1b6173e8965eba234b45cb1f88789049f41edd8a04d789a4ba7851fa from storage" id=18936f50-4957-42d2-bfc0-50b88dc6ed55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:09 functional-889240 crio[2966]: time="2025-10-03T18:14:09.23828998Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-889240_kube-system_c6bcf20a60b81dff297fc63f5b978297_0" id=18936f50-4957-42d2-bfc0-50b88dc6ed55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:11 functional-889240 crio[2966]: time="2025-10-03T18:14:11.211944062Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=258d909b-abe8-4bab-9eb9-154ce3bd057f name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:11 functional-889240 crio[2966]: time="2025-10-03T18:14:11.212772586Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=5fae6f93-a02d-4605-b1e0-241bc6b01232 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:11 functional-889240 crio[2966]: time="2025-10-03T18:14:11.213529051Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-889240/kube-scheduler" id=d9636d42-c403-4dac-a41c-9ad49be471b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:11 functional-889240 crio[2966]: time="2025-10-03T18:14:11.213788313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:14:11 functional-889240 crio[2966]: time="2025-10-03T18:14:11.216948054Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:14:11 functional-889240 crio[2966]: time="2025-10-03T18:14:11.217376826Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:14:11 functional-889240 crio[2966]: time="2025-10-03T18:14:11.236758404Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d9636d42-c403-4dac-a41c-9ad49be471b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:11 functional-889240 crio[2966]: time="2025-10-03T18:14:11.238136749Z" level=info msg="createCtr: deleting container ID 37e0b0de5fa174fc2fb7baf919d8bbbe8227a3244b2c4eeb5ab2e0fb435d641d from idIndex" id=d9636d42-c403-4dac-a41c-9ad49be471b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:11 functional-889240 crio[2966]: time="2025-10-03T18:14:11.23816788Z" level=info msg="createCtr: removing container 37e0b0de5fa174fc2fb7baf919d8bbbe8227a3244b2c4eeb5ab2e0fb435d641d" id=d9636d42-c403-4dac-a41c-9ad49be471b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:11 functional-889240 crio[2966]: time="2025-10-03T18:14:11.238203696Z" level=info msg="createCtr: deleting container 37e0b0de5fa174fc2fb7baf919d8bbbe8227a3244b2c4eeb5ab2e0fb435d641d from storage" id=d9636d42-c403-4dac-a41c-9ad49be471b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:11 functional-889240 crio[2966]: time="2025-10-03T18:14:11.240064763Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-889240_kube-system_7dadd1df42d6a2c3d1907f134f7d5ea7_0" id=d9636d42-c403-4dac-a41c-9ad49be471b7 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:14:18.326670    4551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:14:18.327158    4551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:14:18.328643    4551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:14:18.329066    4551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:14:18.330533    4551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:14:18 up 56 min,  0 user,  load average: 0.14, 0.03, 0.04
	Linux functional-889240 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:14:09 functional-889240 kubelet[1817]: E1003 18:14:09.238505    1817 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:14:09 functional-889240 kubelet[1817]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:14:09 functional-889240 kubelet[1817]:  > podSandboxID="bb5ee21569299932af0968d7ca6c3e44bd5f6c5d7c8e5900d54800ccc90ccf96"
	Oct 03 18:14:09 functional-889240 kubelet[1817]: E1003 18:14:09.238632    1817 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:14:09 functional-889240 kubelet[1817]:         container kube-apiserver start failed in pod kube-apiserver-functional-889240_kube-system(c6bcf20a60b81dff297fc63f5b978297): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:14:09 functional-889240 kubelet[1817]:  > logger="UnhandledError"
	Oct 03 18:14:09 functional-889240 kubelet[1817]: E1003 18:14:09.238673    1817 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-889240" podUID="c6bcf20a60b81dff297fc63f5b978297"
	Oct 03 18:14:09 functional-889240 kubelet[1817]: E1003 18:14:09.250684    1817 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-889240\" not found"
	Oct 03 18:14:09 functional-889240 kubelet[1817]: E1003 18:14:09.387666    1817 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 03 18:14:09 functional-889240 kubelet[1817]: E1003 18:14:09.890345    1817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-889240?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 03 18:14:10 functional-889240 kubelet[1817]: I1003 18:14:10.086625    1817 kubelet_node_status.go:75] "Attempting to register node" node="functional-889240"
	Oct 03 18:14:10 functional-889240 kubelet[1817]: E1003 18:14:10.087008    1817 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-889240"
	Oct 03 18:14:11 functional-889240 kubelet[1817]: E1003 18:14:11.211567    1817 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:14:11 functional-889240 kubelet[1817]: E1003 18:14:11.240340    1817 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:14:11 functional-889240 kubelet[1817]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:14:11 functional-889240 kubelet[1817]:  > podSandboxID="9ea0d784c2fd12bcd1db05033ba2964baa15be14deeae00b6508f924c37e3473"
	Oct 03 18:14:11 functional-889240 kubelet[1817]: E1003 18:14:11.240438    1817 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:14:11 functional-889240 kubelet[1817]:         container kube-scheduler start failed in pod kube-scheduler-functional-889240_kube-system(7dadd1df42d6a2c3d1907f134f7d5ea7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:14:11 functional-889240 kubelet[1817]:  > logger="UnhandledError"
	Oct 03 18:14:11 functional-889240 kubelet[1817]: E1003 18:14:11.240489    1817 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-889240" podUID="7dadd1df42d6a2c3d1907f134f7d5ea7"
	Oct 03 18:14:11 functional-889240 kubelet[1817]: E1003 18:14:11.497556    1817 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-889240.186b0d404ae58a04\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-889240.186b0d404ae58a04  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-889240,UID:functional-889240,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-889240 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-889240,},FirstTimestamp:2025-10-03 18:04:09.203935748 +0000 UTC m=+0.376858749,LastTimestamp:2025-10-03 18:04:09.206706066 +0000 UTC m=+0.379629064,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-889240,}"
	Oct 03 18:14:16 functional-889240 kubelet[1817]: E1003 18:14:16.025257    1817 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 03 18:14:16 functional-889240 kubelet[1817]: E1003 18:14:16.890931    1817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-889240?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 03 18:14:17 functional-889240 kubelet[1817]: I1003 18:14:17.088611    1817 kubelet_node_status.go:75] "Attempting to register node" node="functional-889240"
	Oct 03 18:14:17 functional-889240 kubelet[1817]: E1003 18:14:17.089015    1817 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-889240"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240: exit status 2 (299.066823ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-889240" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (1.98s)
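
The --format={{.APIServer}} argument used above is a Go text/template rendered against minikube's status structure, which is why the entire command output is the single word "Stopped". A toy reproduction of that mechanism follows; the struct fields here are illustrative, not minikube's exact type.

// Toy reproduction of how a --format Go template yields "Stopped".
package main

import (
	"os"
	"text/template"
)

// Status stands in for minikube's status struct (field names assumed).
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	s := Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	_ = tmpl.Execute(os.Stdout, s) // prints: Stopped
}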

TestFunctional/serial/MinikubeKubectlCmd (2.06s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 kubectl -- --context functional-889240 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889240 kubectl -- --context functional-889240 get pods: exit status 1 (101.149063ms)

** stderr ** 
	E1003 18:14:24.675706   37105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:14:24.676026   37105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:14:24.677387   37105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:14:24.677712   37105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:14:24.679063   37105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-amd64 -p functional-889240 kubectl -- --context functional-889240 get pods": exit status 1
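
The five memcache.go lines above are client-go's API discovery layer retrying the /api group list before kubectl gives up with exit status 1. A small Go sketch (hypothetical, not harness code; it simply re-runs the exact command the test ran and captures the exit code the harness checks):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as functional_test.go:731, copied from the report.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-889240",
		"kubectl", "--", "--context", "functional-889240", "get", "pods")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// While the apiserver refuses connections, kubectl exits 1, as above.
		fmt.Println("exit status:", exitErr.ExitCode())
	}
}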
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-889240
helpers_test.go:243: (dbg) docker inspect functional-889240:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	        "Created": "2025-10-03T17:59:56.619817507Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 26766,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T17:59:56.652603806Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hosts",
	        "LogPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a-json.log",
	        "Name": "/functional-889240",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-889240:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-889240",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	                "LowerDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-889240",
	                "Source": "/var/lib/docker/volumes/functional-889240/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-889240",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-889240",
	                "name.minikube.sigs.k8s.io": "functional-889240",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da15d31dc23bdd4694ae9e3b61015d7ce0d61668c73d3e386422834c6f0321d8",
	            "SandboxKey": "/var/run/docker/netns/da15d31dc23b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-889240": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:9e:1d:e9:d9:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "03281bed183d0817c0bc237b5c25093fc10222138aedde4c7deef5823759fa24",
	                    "EndpointID": "28fa584fdd6e253816ae08a2460ef02b91085c8a7996d55008876e3bd65bbc7e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-889240",
	                        "9f4f0f10b4a9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
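
The NetworkSettings.Ports block in the inspect output above is what the harness later reads back through the docker container inspect -f templates visible in the Last Start log below; the apiserver's 8441/tcp is published on 127.0.0.1:32781. A minimal Go sketch (a hypothetical helper, not minikube code) that extracts the same mapping from raw inspect JSON piped to stdin:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// inspectDoc models only the fields of `docker inspect` output used here.
type inspectDoc []struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	var docs inspectDoc
	if err := json.NewDecoder(os.Stdin).Decode(&docs); err != nil || len(docs) == 0 {
		fmt.Fprintln(os.Stderr, "could not decode inspect JSON:", err)
		os.Exit(1)
	}
	// "8441/tcp" is the apiserver port the container above publishes.
	for _, binding := range docs[0].NetworkSettings.Ports["8441/tcp"] {
		fmt.Printf("%s:%s\n", binding.HostIp, binding.HostPort)
	}
}

Usage (assuming the file is saved as portmap.go): docker inspect functional-889240 | go run portmap.go prints 127.0.0.1:32781 for the inspect output shown above.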
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240: exit status 2 (286.884371ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 logs -n 25
helpers_test.go:260: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ nospam-093146 --log_dir /tmp/nospam-093146 pause                                                              │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ unpause │ nospam-093146 --log_dir /tmp/nospam-093146 unpause                                                            │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ unpause │ nospam-093146 --log_dir /tmp/nospam-093146 unpause                                                            │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ unpause │ nospam-093146 --log_dir /tmp/nospam-093146 unpause                                                            │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ stop    │ nospam-093146 --log_dir /tmp/nospam-093146 stop                                                               │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ stop    │ nospam-093146 --log_dir /tmp/nospam-093146 stop                                                               │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ stop    │ nospam-093146 --log_dir /tmp/nospam-093146 stop                                                               │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ delete  │ -p nospam-093146                                                                                              │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ start   │ -p functional-889240 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │                     │
	│ start   │ -p functional-889240 --alsologtostderr -v=8                                                                   │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:08 UTC │                     │
	│ cache   │ functional-889240 cache add registry.k8s.io/pause:3.1                                                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ functional-889240 cache add registry.k8s.io/pause:3.3                                                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ functional-889240 cache add registry.k8s.io/pause:latest                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ functional-889240 cache add minikube-local-cache-test:functional-889240                                       │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ functional-889240 cache delete minikube-local-cache-test:functional-889240                                    │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ ssh     │ functional-889240 ssh sudo crictl images                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ ssh     │ functional-889240 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ ssh     │ functional-889240 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │                     │
	│ cache   │ functional-889240 cache reload                                                                                │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ ssh     │ functional-889240 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ kubectl │ functional-889240 kubectl -- --context functional-889240 get pods                                             │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:08:11
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:08:11.068231   31648 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:08:11.068486   31648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:08:11.068496   31648 out.go:374] Setting ErrFile to fd 2...
	I1003 18:08:11.068502   31648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:08:11.068729   31648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:08:11.069215   31648 out.go:368] Setting JSON to false
	I1003 18:08:11.070085   31648 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3042,"bootTime":1759511849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:08:11.070168   31648 start.go:140] virtualization: kvm guest
	I1003 18:08:11.073397   31648 out.go:179] * [functional-889240] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:08:11.074567   31648 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:08:11.074571   31648 notify.go:220] Checking for updates...
	I1003 18:08:11.077123   31648 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:08:11.078380   31648 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:11.079542   31648 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:08:11.080665   31648 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:08:11.081754   31648 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:08:11.083246   31648 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:08:11.083337   31648 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:08:11.109195   31648 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:08:11.109276   31648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:08:11.161161   31648 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:08:11.151693527 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:08:11.161260   31648 docker.go:318] overlay module found
	I1003 18:08:11.162933   31648 out.go:179] * Using the docker driver based on existing profile
	I1003 18:08:11.164103   31648 start.go:304] selected driver: docker
	I1003 18:08:11.164115   31648 start.go:924] validating driver "docker" against &{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:08:11.164183   31648 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:08:11.164266   31648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:08:11.217384   31648 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:08:11.207171248 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:08:11.218094   31648 cni.go:84] Creating CNI manager for ""
	I1003 18:08:11.218156   31648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:08:11.218200   31648 start.go:348] cluster config:
	{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:08:11.220110   31648 out.go:179] * Starting "functional-889240" primary control-plane node in "functional-889240" cluster
	I1003 18:08:11.221257   31648 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:08:11.222336   31648 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:08:11.223595   31648 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:08:11.223644   31648 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:08:11.223654   31648 cache.go:58] Caching tarball of preloaded images
	I1003 18:08:11.223686   31648 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:08:11.223758   31648 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:08:11.223772   31648 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:08:11.223859   31648 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/config.json ...
	I1003 18:08:11.242913   31648 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:08:11.242930   31648 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:08:11.242946   31648 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:08:11.242988   31648 start.go:360] acquireMachinesLock for functional-889240: {Name:mk6750a9fb1c1c3747b0abf2aebe2a2d0047ae3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:08:11.243063   31648 start.go:364] duration metric: took 50.516µs to acquireMachinesLock for "functional-889240"
	I1003 18:08:11.243090   31648 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:08:11.243097   31648 fix.go:54] fixHost starting: 
	I1003 18:08:11.243298   31648 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:08:11.259925   31648 fix.go:112] recreateIfNeeded on functional-889240: state=Running err=<nil>
	W1003 18:08:11.259951   31648 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 18:08:11.261699   31648 out.go:252] * Updating the running docker "functional-889240" container ...
	I1003 18:08:11.261731   31648 machine.go:93] provisionDockerMachine start ...
	I1003 18:08:11.261806   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:11.278828   31648 main.go:141] libmachine: Using SSH client type: native
	I1003 18:08:11.279109   31648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:08:11.279121   31648 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:08:11.421621   31648 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-889240
	
	I1003 18:08:11.421642   31648 ubuntu.go:182] provisioning hostname "functional-889240"
	I1003 18:08:11.421693   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:11.439154   31648 main.go:141] libmachine: Using SSH client type: native
	I1003 18:08:11.439372   31648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:08:11.439384   31648 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-889240 && echo "functional-889240" | sudo tee /etc/hostname
	I1003 18:08:11.590164   31648 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-889240
	
	I1003 18:08:11.590238   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:11.607612   31648 main.go:141] libmachine: Using SSH client type: native
	I1003 18:08:11.607822   31648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:08:11.607839   31648 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-889240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-889240/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-889240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:08:11.750385   31648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:08:11.750412   31648 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:08:11.750443   31648 ubuntu.go:190] setting up certificates
	I1003 18:08:11.750454   31648 provision.go:84] configureAuth start
	I1003 18:08:11.750512   31648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-889240
	I1003 18:08:11.767416   31648 provision.go:143] copyHostCerts
	I1003 18:08:11.767453   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:08:11.767484   31648 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:08:11.767498   31648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:08:11.767564   31648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:08:11.767659   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:08:11.767679   31648 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:08:11.767686   31648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:08:11.767714   31648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:08:11.767934   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:08:11.768183   31648 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:08:11.768200   31648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:08:11.768251   31648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:08:11.768350   31648 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.functional-889240 san=[127.0.0.1 192.168.49.2 functional-889240 localhost minikube]
	I1003 18:08:11.920440   31648 provision.go:177] copyRemoteCerts
	I1003 18:08:11.920514   31648 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:08:11.920551   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:11.938061   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.037875   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:08:12.037937   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1003 18:08:12.054720   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:08:12.054773   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 18:08:12.071055   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:08:12.071110   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:08:12.087547   31648 provision.go:87] duration metric: took 337.079976ms to configureAuth
	I1003 18:08:12.087574   31648 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:08:12.087766   31648 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:08:12.087867   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.105048   31648 main.go:141] libmachine: Using SSH client type: native
	I1003 18:08:12.105289   31648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:08:12.105305   31648 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:08:12.366340   31648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:08:12.366367   31648 machine.go:96] duration metric: took 1.104629442s to provisionDockerMachine
	I1003 18:08:12.366377   31648 start.go:293] postStartSetup for "functional-889240" (driver="docker")
	I1003 18:08:12.366388   31648 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:08:12.366431   31648 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:08:12.366476   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.383468   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.483988   31648 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:08:12.487264   31648 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1003 18:08:12.487282   31648 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1003 18:08:12.487289   31648 command_runner.go:130] > VERSION_ID="12"
	I1003 18:08:12.487295   31648 command_runner.go:130] > VERSION="12 (bookworm)"
	I1003 18:08:12.487301   31648 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1003 18:08:12.487306   31648 command_runner.go:130] > ID=debian
	I1003 18:08:12.487313   31648 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1003 18:08:12.487320   31648 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1003 18:08:12.487329   31648 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1003 18:08:12.487402   31648 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:08:12.487425   31648 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:08:12.487438   31648 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:08:12.487491   31648 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:08:12.487581   31648 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:08:12.487593   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:08:12.487688   31648 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts -> hosts in /etc/test/nested/copy/12212
	I1003 18:08:12.487697   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts -> /etc/test/nested/copy/12212/hosts
	I1003 18:08:12.487740   31648 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/12212
	I1003 18:08:12.495127   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:08:12.511597   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts --> /etc/test/nested/copy/12212/hosts (40 bytes)
	I1003 18:08:12.528571   31648 start.go:296] duration metric: took 162.180752ms for postStartSetup
	I1003 18:08:12.528647   31648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:08:12.528710   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.546258   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.643641   31648 command_runner.go:130] > 39%
	I1003 18:08:12.643858   31648 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:08:12.648017   31648 command_runner.go:130] > 179G
	I1003 18:08:12.648284   31648 fix.go:56] duration metric: took 1.405183874s for fixHost
	I1003 18:08:12.648303   31648 start.go:83] releasing machines lock for "functional-889240", held for 1.405223544s
	I1003 18:08:12.648364   31648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-889240
	I1003 18:08:12.665548   31648 ssh_runner.go:195] Run: cat /version.json
	I1003 18:08:12.665589   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.665627   31648 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:08:12.665684   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.683771   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.684037   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.833728   31648 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1003 18:08:12.833784   31648 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1003 18:08:12.833903   31648 ssh_runner.go:195] Run: systemctl --version
	I1003 18:08:12.840008   31648 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1003 18:08:12.840056   31648 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1003 18:08:12.840282   31648 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:08:12.874135   31648 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1003 18:08:12.878285   31648 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1003 18:08:12.878575   31648 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:08:12.878637   31648 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:08:12.886227   31648 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 18:08:12.886250   31648 start.go:495] detecting cgroup driver to use...
	I1003 18:08:12.886282   31648 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:08:12.886327   31648 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:08:12.900106   31648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:08:12.911429   31648 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:08:12.911477   31648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:08:12.925289   31648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:08:12.936739   31648 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:08:13.020667   31648 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:08:13.102263   31648 docker.go:234] disabling docker service ...
	I1003 18:08:13.102328   31648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:08:13.115759   31648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:08:13.127581   31648 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:08:13.208801   31648 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:08:13.298232   31648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:08:13.314511   31648 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:08:13.327949   31648 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1003 18:08:13.328859   31648 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:08:13.328914   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.337658   31648 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:08:13.337709   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.346162   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.354712   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.363098   31648 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:08:13.370793   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.378940   31648 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.386700   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.394938   31648 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:08:13.401467   31648 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1003 18:08:13.402164   31648 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:08:13.409040   31648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:08:13.496423   31648 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 18:08:13.599891   31648 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:08:13.599956   31648 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:08:13.603739   31648 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1003 18:08:13.603760   31648 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1003 18:08:13.603769   31648 command_runner.go:130] > Device: 0,59	Inode: 3868        Links: 1
	I1003 18:08:13.603779   31648 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1003 18:08:13.603787   31648 command_runner.go:130] > Access: 2025-10-03 18:08:13.582699245 +0000
	I1003 18:08:13.603796   31648 command_runner.go:130] > Modify: 2025-10-03 18:08:13.582699245 +0000
	I1003 18:08:13.603806   31648 command_runner.go:130] > Change: 2025-10-03 18:08:13.582699245 +0000
	I1003 18:08:13.603811   31648 command_runner.go:130] >  Birth: 2025-10-03 18:08:13.582699245 +0000
	I1003 18:08:13.603837   31648 start.go:563] Will wait 60s for crictl version
	I1003 18:08:13.603884   31648 ssh_runner.go:195] Run: which crictl
	I1003 18:08:13.607403   31648 command_runner.go:130] > /usr/local/bin/crictl
	I1003 18:08:13.607458   31648 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:08:13.630641   31648 command_runner.go:130] > Version:  0.1.0
	I1003 18:08:13.630667   31648 command_runner.go:130] > RuntimeName:  cri-o
	I1003 18:08:13.630673   31648 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1003 18:08:13.630680   31648 command_runner.go:130] > RuntimeApiVersion:  v1
	I1003 18:08:13.630699   31648 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:08:13.630764   31648 ssh_runner.go:195] Run: crio --version
	I1003 18:08:13.656303   31648 command_runner.go:130] > crio version 1.34.1
	I1003 18:08:13.656324   31648 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1003 18:08:13.656329   31648 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1003 18:08:13.656339   31648 command_runner.go:130] >    GitTreeState:   dirty
	I1003 18:08:13.656344   31648 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1003 18:08:13.656348   31648 command_runner.go:130] >    GoVersion:      go1.24.6
	I1003 18:08:13.656352   31648 command_runner.go:130] >    Compiler:       gc
	I1003 18:08:13.656365   31648 command_runner.go:130] >    Platform:       linux/amd64
	I1003 18:08:13.656372   31648 command_runner.go:130] >    Linkmode:       static
	I1003 18:08:13.656378   31648 command_runner.go:130] >    BuildTags:
	I1003 18:08:13.656383   31648 command_runner.go:130] >      static
	I1003 18:08:13.656387   31648 command_runner.go:130] >      netgo
	I1003 18:08:13.656393   31648 command_runner.go:130] >      osusergo
	I1003 18:08:13.656396   31648 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1003 18:08:13.656402   31648 command_runner.go:130] >      seccomp
	I1003 18:08:13.656405   31648 command_runner.go:130] >      apparmor
	I1003 18:08:13.656410   31648 command_runner.go:130] >      selinux
	I1003 18:08:13.656415   31648 command_runner.go:130] >    LDFlags:          unknown
	I1003 18:08:13.656421   31648 command_runner.go:130] >    SeccompEnabled:   true
	I1003 18:08:13.656426   31648 command_runner.go:130] >    AppArmorEnabled:  false
	I1003 18:08:13.657588   31648 ssh_runner.go:195] Run: crio --version
	I1003 18:08:13.682656   31648 command_runner.go:130] > crio version 1.34.1
	I1003 18:08:13.682693   31648 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1003 18:08:13.682698   31648 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1003 18:08:13.682703   31648 command_runner.go:130] >    GitTreeState:   dirty
	I1003 18:08:13.682708   31648 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1003 18:08:13.682712   31648 command_runner.go:130] >    GoVersion:      go1.24.6
	I1003 18:08:13.682716   31648 command_runner.go:130] >    Compiler:       gc
	I1003 18:08:13.682720   31648 command_runner.go:130] >    Platform:       linux/amd64
	I1003 18:08:13.682724   31648 command_runner.go:130] >    Linkmode:       static
	I1003 18:08:13.682728   31648 command_runner.go:130] >    BuildTags:
	I1003 18:08:13.682733   31648 command_runner.go:130] >      static
	I1003 18:08:13.682737   31648 command_runner.go:130] >      netgo
	I1003 18:08:13.682741   31648 command_runner.go:130] >      osusergo
	I1003 18:08:13.682746   31648 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1003 18:08:13.682753   31648 command_runner.go:130] >      seccomp
	I1003 18:08:13.682756   31648 command_runner.go:130] >      apparmor
	I1003 18:08:13.682759   31648 command_runner.go:130] >      selinux
	I1003 18:08:13.682763   31648 command_runner.go:130] >    LDFlags:          unknown
	I1003 18:08:13.682770   31648 command_runner.go:130] >    SeccompEnabled:   true
	I1003 18:08:13.682774   31648 command_runner.go:130] >    AppArmorEnabled:  false
	I1003 18:08:13.685817   31648 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:08:13.686852   31648 cli_runner.go:164] Run: docker network inspect functional-889240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
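The Go template in that inspect call pulls the network name, driver, subnet, gateway, MTU and container IPs out of `docker network inspect` in one shot. A simpler illustration of the same technique (not what minikube runs) would be:

	    docker network inspect functional-889240 --format '{{json .IPAM.Config}}'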
	I1003 18:08:13.703291   31648 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:08:13.707207   31648 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1003 18:08:13.707295   31648 kubeadm.go:883] updating cluster {Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:08:13.707417   31648 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:08:13.707473   31648 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:08:13.737725   31648 command_runner.go:130] > {
	I1003 18:08:13.737745   31648 command_runner.go:130] >   "images":  [
	I1003 18:08:13.737749   31648 command_runner.go:130] >     {
	I1003 18:08:13.737755   31648 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1003 18:08:13.737763   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.737773   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1003 18:08:13.737780   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737786   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.737798   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1003 18:08:13.737807   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1003 18:08:13.737811   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737815   31648 command_runner.go:130] >       "size":  "109379124",
	I1003 18:08:13.737819   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.737828   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.737832   31648 command_runner.go:130] >     },
	I1003 18:08:13.737835   31648 command_runner.go:130] >     {
	I1003 18:08:13.737841   31648 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1003 18:08:13.737848   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.737859   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1003 18:08:13.737868   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737875   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.737886   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1003 18:08:13.737898   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1003 18:08:13.737904   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737908   31648 command_runner.go:130] >       "size":  "31470524",
	I1003 18:08:13.737914   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.737920   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.737931   31648 command_runner.go:130] >     },
	I1003 18:08:13.737939   31648 command_runner.go:130] >     {
	I1003 18:08:13.737948   31648 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1003 18:08:13.737958   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.737969   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1003 18:08:13.737987   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737995   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738007   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1003 18:08:13.738023   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1003 18:08:13.738031   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738037   31648 command_runner.go:130] >       "size":  "76103547",
	I1003 18:08:13.738045   31648 command_runner.go:130] >       "username":  "nonroot",
	I1003 18:08:13.738049   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738054   31648 command_runner.go:130] >     },
	I1003 18:08:13.738058   31648 command_runner.go:130] >     {
	I1003 18:08:13.738070   31648 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1003 18:08:13.738081   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738091   31648 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1003 18:08:13.738100   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738110   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738124   31648 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1003 18:08:13.738137   31648 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1003 18:08:13.738143   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738148   31648 command_runner.go:130] >       "size":  "195976448",
	I1003 18:08:13.738155   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738165   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.738175   31648 command_runner.go:130] >       },
	I1003 18:08:13.738187   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738197   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738205   31648 command_runner.go:130] >     },
	I1003 18:08:13.738212   31648 command_runner.go:130] >     {
	I1003 18:08:13.738223   31648 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1003 18:08:13.738230   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738236   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1003 18:08:13.738245   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738256   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738270   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1003 18:08:13.738285   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1003 18:08:13.738293   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738301   31648 command_runner.go:130] >       "size":  "89046001",
	I1003 18:08:13.738308   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738312   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.738315   31648 command_runner.go:130] >       },
	I1003 18:08:13.738320   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738329   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738338   31648 command_runner.go:130] >     },
	I1003 18:08:13.738344   31648 command_runner.go:130] >     {
	I1003 18:08:13.738357   31648 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1003 18:08:13.738366   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738377   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1003 18:08:13.738386   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738395   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738402   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1003 18:08:13.738418   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1003 18:08:13.738427   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738434   31648 command_runner.go:130] >       "size":  "76004181",
	I1003 18:08:13.738443   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738453   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.738460   31648 command_runner.go:130] >       },
	I1003 18:08:13.738467   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738475   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738480   31648 command_runner.go:130] >     },
	I1003 18:08:13.738484   31648 command_runner.go:130] >     {
	I1003 18:08:13.738493   31648 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1003 18:08:13.738502   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738514   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1003 18:08:13.738522   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738531   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738545   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1003 18:08:13.738560   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1003 18:08:13.738568   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738572   31648 command_runner.go:130] >       "size":  "73138073",
	I1003 18:08:13.738580   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738586   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738595   31648 command_runner.go:130] >     },
	I1003 18:08:13.738605   31648 command_runner.go:130] >     {
	I1003 18:08:13.738617   31648 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1003 18:08:13.738625   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738634   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1003 18:08:13.738642   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738648   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738658   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1003 18:08:13.738674   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1003 18:08:13.738683   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738693   31648 command_runner.go:130] >       "size":  "53844823",
	I1003 18:08:13.738702   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738710   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.738718   31648 command_runner.go:130] >       },
	I1003 18:08:13.738724   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738733   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738743   31648 command_runner.go:130] >     },
	I1003 18:08:13.738747   31648 command_runner.go:130] >     {
	I1003 18:08:13.738756   31648 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1003 18:08:13.738766   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738777   31648 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1003 18:08:13.738785   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738792   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738806   31648 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1003 18:08:13.738819   31648 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1003 18:08:13.738827   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738832   31648 command_runner.go:130] >       "size":  "742092",
	I1003 18:08:13.738838   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738843   31648 command_runner.go:130] >         "value":  "65535"
	I1003 18:08:13.738851   31648 command_runner.go:130] >       },
	I1003 18:08:13.738862   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738871   31648 command_runner.go:130] >       "pinned":  true
	I1003 18:08:13.738885   31648 command_runner.go:130] >     }
	I1003 18:08:13.738890   31648 command_runner.go:130] >   ]
	I1003 18:08:13.738898   31648 command_runner.go:130] > }
	I1003 18:08:13.739109   31648 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:08:13.739126   31648 crio.go:433] Images already preloaded, skipping extraction
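To reproduce the preload check by hand, the repo tags can be pulled out of that JSON directly; a one-liner sketch, assuming jq is available on the node (it is not part of this run):

	    sudo crictl images --output json | jq -r '.images[].repoTags[]'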
	I1003 18:08:13.739173   31648 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:08:13.761526   31648 command_runner.go:130] > {
	I1003 18:08:13.761550   31648 command_runner.go:130] >   "images":  [
	I1003 18:08:13.761558   31648 command_runner.go:130] >     {
	I1003 18:08:13.761569   31648 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1003 18:08:13.761577   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.761586   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1003 18:08:13.761592   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761599   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.761616   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1003 18:08:13.761631   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1003 18:08:13.761639   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761646   31648 command_runner.go:130] >       "size":  "109379124",
	I1003 18:08:13.761659   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.761672   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.761681   31648 command_runner.go:130] >     },
	I1003 18:08:13.761686   31648 command_runner.go:130] >     {
	I1003 18:08:13.761698   31648 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1003 18:08:13.761708   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.761719   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1003 18:08:13.761728   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761737   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.761753   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1003 18:08:13.761770   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1003 18:08:13.761779   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761789   31648 command_runner.go:130] >       "size":  "31470524",
	I1003 18:08:13.761799   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.761810   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.761818   31648 command_runner.go:130] >     },
	I1003 18:08:13.761823   31648 command_runner.go:130] >     {
	I1003 18:08:13.761836   31648 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1003 18:08:13.761845   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.761852   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1003 18:08:13.761860   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761866   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.761879   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1003 18:08:13.761889   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1003 18:08:13.761897   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761903   31648 command_runner.go:130] >       "size":  "76103547",
	I1003 18:08:13.761913   31648 command_runner.go:130] >       "username":  "nonroot",
	I1003 18:08:13.761922   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.761934   31648 command_runner.go:130] >     },
	I1003 18:08:13.761942   31648 command_runner.go:130] >     {
	I1003 18:08:13.761952   31648 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1003 18:08:13.761960   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.761970   31648 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1003 18:08:13.762000   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762008   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762019   31648 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1003 18:08:13.762032   31648 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1003 18:08:13.762041   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762051   31648 command_runner.go:130] >       "size":  "195976448",
	I1003 18:08:13.762060   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762068   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.762074   31648 command_runner.go:130] >       },
	I1003 18:08:13.762087   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762097   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762101   31648 command_runner.go:130] >     },
	I1003 18:08:13.762109   31648 command_runner.go:130] >     {
	I1003 18:08:13.762117   31648 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1003 18:08:13.762126   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762135   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1003 18:08:13.762143   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762149   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762163   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1003 18:08:13.762178   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1003 18:08:13.762186   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762193   31648 command_runner.go:130] >       "size":  "89046001",
	I1003 18:08:13.762202   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762212   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.762221   31648 command_runner.go:130] >       },
	I1003 18:08:13.762229   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762239   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762248   31648 command_runner.go:130] >     },
	I1003 18:08:13.762256   31648 command_runner.go:130] >     {
	I1003 18:08:13.762265   31648 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1003 18:08:13.762275   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762284   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1003 18:08:13.762292   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762303   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762319   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1003 18:08:13.762335   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1003 18:08:13.762343   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762353   31648 command_runner.go:130] >       "size":  "76004181",
	I1003 18:08:13.762361   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762367   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.762374   31648 command_runner.go:130] >       },
	I1003 18:08:13.762380   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762388   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762392   31648 command_runner.go:130] >     },
	I1003 18:08:13.762401   31648 command_runner.go:130] >     {
	I1003 18:08:13.762412   31648 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1003 18:08:13.762422   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762431   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1003 18:08:13.762438   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762444   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762456   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1003 18:08:13.762468   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1003 18:08:13.762477   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762487   31648 command_runner.go:130] >       "size":  "73138073",
	I1003 18:08:13.762497   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762506   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762515   31648 command_runner.go:130] >     },
	I1003 18:08:13.762523   31648 command_runner.go:130] >     {
	I1003 18:08:13.762533   31648 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1003 18:08:13.762539   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762547   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1003 18:08:13.762552   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762559   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762570   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1003 18:08:13.762593   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1003 18:08:13.762602   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762608   31648 command_runner.go:130] >       "size":  "53844823",
	I1003 18:08:13.762616   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762623   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.762630   31648 command_runner.go:130] >       },
	I1003 18:08:13.762636   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762645   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762653   31648 command_runner.go:130] >     },
	I1003 18:08:13.762657   31648 command_runner.go:130] >     {
	I1003 18:08:13.762665   31648 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1003 18:08:13.762671   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762681   31648 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1003 18:08:13.762686   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762695   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762706   31648 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1003 18:08:13.762720   31648 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1003 18:08:13.762728   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762732   31648 command_runner.go:130] >       "size":  "742092",
	I1003 18:08:13.762737   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762742   31648 command_runner.go:130] >         "value":  "65535"
	I1003 18:08:13.762747   31648 command_runner.go:130] >       },
	I1003 18:08:13.762751   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762757   31648 command_runner.go:130] >       "pinned":  true
	I1003 18:08:13.762761   31648 command_runner.go:130] >     }
	I1003 18:08:13.762766   31648 command_runner.go:130] >   ]
	I1003 18:08:13.762769   31648 command_runner.go:130] > }
	I1003 18:08:13.763568   31648 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:08:13.763587   31648 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:08:13.763596   31648 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1003 18:08:13.763703   31648 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-889240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
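One detail worth noting in the generated override: the empty `ExecStart=` line immediately before the real one is the standard systemd idiom for clearing the ExecStart inherited from the base kubelet.service; without it, systemd would reject a non-oneshot unit that ends up with two ExecStart values. A minimal drop-in showing the pattern (paths illustrative):

	    [Service]
	    ExecStart=
	    ExecStart=/usr/local/bin/mydaemon --flag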
	I1003 18:08:13.763779   31648 ssh_runner.go:195] Run: crio config
	I1003 18:08:13.802487   31648 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1003 18:08:13.802512   31648 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1003 18:08:13.802523   31648 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1003 18:08:13.802528   31648 command_runner.go:130] > #
	I1003 18:08:13.802538   31648 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1003 18:08:13.802546   31648 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1003 18:08:13.802555   31648 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1003 18:08:13.802566   31648 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1003 18:08:13.802572   31648 command_runner.go:130] > # reload'.
	I1003 18:08:13.802583   31648 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1003 18:08:13.802595   31648 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1003 18:08:13.802606   31648 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1003 18:08:13.802615   31648 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1003 18:08:13.802622   31648 command_runner.go:130] > [crio]
	I1003 18:08:13.802632   31648 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1003 18:08:13.802640   31648 command_runner.go:130] > # containers images, in this directory.
	I1003 18:08:13.802653   31648 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1003 18:08:13.802671   31648 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1003 18:08:13.802680   31648 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1003 18:08:13.802693   31648 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1003 18:08:13.802704   31648 command_runner.go:130] > # imagestore = ""
	I1003 18:08:13.802714   31648 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1003 18:08:13.802726   31648 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1003 18:08:13.802736   31648 command_runner.go:130] > # storage_driver = "overlay"
	I1003 18:08:13.802747   31648 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1003 18:08:13.802761   31648 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1003 18:08:13.802770   31648 command_runner.go:130] > # storage_option = [
	I1003 18:08:13.802777   31648 command_runner.go:130] > # ]
	I1003 18:08:13.802788   31648 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1003 18:08:13.802800   31648 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1003 18:08:13.802808   31648 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1003 18:08:13.802820   31648 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1003 18:08:13.802830   31648 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1003 18:08:13.802835   31648 command_runner.go:130] > # always happen on a node reboot
	I1003 18:08:13.802840   31648 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1003 18:08:13.802849   31648 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1003 18:08:13.802860   31648 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1003 18:08:13.802865   31648 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1003 18:08:13.802871   31648 command_runner.go:130] > # version_file_persist = ""
	I1003 18:08:13.802882   31648 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1003 18:08:13.802899   31648 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1003 18:08:13.802906   31648 command_runner.go:130] > # internal_wipe = true
	I1003 18:08:13.802917   31648 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1003 18:08:13.802929   31648 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1003 18:08:13.802935   31648 command_runner.go:130] > # internal_repair = true
	I1003 18:08:13.802943   31648 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1003 18:08:13.802953   31648 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1003 18:08:13.802966   31648 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1003 18:08:13.802985   31648 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1003 18:08:13.802996   31648 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1003 18:08:13.803006   31648 command_runner.go:130] > [crio.api]
	I1003 18:08:13.803015   31648 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1003 18:08:13.803025   31648 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1003 18:08:13.803033   31648 command_runner.go:130] > # IP address on which the stream server will listen.
	I1003 18:08:13.803043   31648 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1003 18:08:13.803054   31648 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1003 18:08:13.803065   31648 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1003 18:08:13.803072   31648 command_runner.go:130] > # stream_port = "0"
	I1003 18:08:13.803083   31648 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1003 18:08:13.803090   31648 command_runner.go:130] > # stream_enable_tls = false
	I1003 18:08:13.803102   31648 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1003 18:08:13.803114   31648 command_runner.go:130] > # stream_idle_timeout = ""
	I1003 18:08:13.803124   31648 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1003 18:08:13.803136   31648 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1003 18:08:13.803146   31648 command_runner.go:130] > # stream_tls_cert = ""
	I1003 18:08:13.803156   31648 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1003 18:08:13.803166   31648 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1003 18:08:13.803175   31648 command_runner.go:130] > # stream_tls_key = ""
	I1003 18:08:13.803185   31648 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1003 18:08:13.803197   31648 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1003 18:08:13.803202   31648 command_runner.go:130] > # automatically pick up the changes.
	I1003 18:08:13.803207   31648 command_runner.go:130] > # stream_tls_ca = ""
	I1003 18:08:13.803271   31648 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1003 18:08:13.803286   31648 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1003 18:08:13.803296   31648 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1003 18:08:13.803308   31648 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1003 18:08:13.803318   31648 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1003 18:08:13.803331   31648 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1003 18:08:13.803338   31648 command_runner.go:130] > [crio.runtime]
	I1003 18:08:13.803350   31648 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1003 18:08:13.803358   31648 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1003 18:08:13.803367   31648 command_runner.go:130] > # "nofile=1024:2048"
	I1003 18:08:13.803378   31648 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1003 18:08:13.803388   31648 command_runner.go:130] > # default_ulimits = [
	I1003 18:08:13.803393   31648 command_runner.go:130] > # ]
	I1003 18:08:13.803403   31648 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1003 18:08:13.803409   31648 command_runner.go:130] > # no_pivot = false
	I1003 18:08:13.803422   31648 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1003 18:08:13.803432   31648 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1003 18:08:13.803444   31648 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1003 18:08:13.803455   31648 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1003 18:08:13.803462   31648 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1003 18:08:13.803473   31648 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1003 18:08:13.803482   31648 command_runner.go:130] > # conmon = ""
	I1003 18:08:13.803489   31648 command_runner.go:130] > # Cgroup setting for conmon
	I1003 18:08:13.803504   31648 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1003 18:08:13.803513   31648 command_runner.go:130] > conmon_cgroup = "pod"
	I1003 18:08:13.803523   31648 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1003 18:08:13.803534   31648 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1003 18:08:13.803545   31648 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1003 18:08:13.803554   31648 command_runner.go:130] > # conmon_env = [
	I1003 18:08:13.803560   31648 command_runner.go:130] > # ]
	I1003 18:08:13.803573   31648 command_runner.go:130] > # Additional environment variables to set for all the
	I1003 18:08:13.803583   31648 command_runner.go:130] > # containers. These are overridden if set in the
	I1003 18:08:13.803595   31648 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1003 18:08:13.803603   31648 command_runner.go:130] > # default_env = [
	I1003 18:08:13.803611   31648 command_runner.go:130] > # ]
	I1003 18:08:13.803620   31648 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1003 18:08:13.803635   31648 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1003 18:08:13.803644   31648 command_runner.go:130] > # selinux = false
	I1003 18:08:13.803657   31648 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1003 18:08:13.803681   31648 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1003 18:08:13.803693   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.803703   31648 command_runner.go:130] > # seccomp_profile = ""
	I1003 18:08:13.803714   31648 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1003 18:08:13.803725   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.803735   31648 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1003 18:08:13.803746   31648 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1003 18:08:13.803760   31648 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1003 18:08:13.803772   31648 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1003 18:08:13.803785   31648 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1003 18:08:13.803796   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.803803   31648 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1003 18:08:13.803817   31648 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1003 18:08:13.803827   31648 command_runner.go:130] > # the cgroup blockio controller.
	I1003 18:08:13.803833   31648 command_runner.go:130] > # blockio_config_file = ""
	I1003 18:08:13.803847   31648 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1003 18:08:13.803856   31648 command_runner.go:130] > # blockio parameters.
	I1003 18:08:13.803862   31648 command_runner.go:130] > # blockio_reload = false
	I1003 18:08:13.803869   31648 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1003 18:08:13.803877   31648 command_runner.go:130] > # irqbalance daemon.
	I1003 18:08:13.803883   31648 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1003 18:08:13.803890   31648 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1003 18:08:13.803906   31648 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1003 18:08:13.803916   31648 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1003 18:08:13.803925   31648 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1003 18:08:13.803933   31648 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1003 18:08:13.803939   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.803951   31648 command_runner.go:130] > # rdt_config_file = ""
	I1003 18:08:13.803958   31648 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1003 18:08:13.803970   31648 command_runner.go:130] > # cgroup_manager = "systemd"
	I1003 18:08:13.803987   31648 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1003 18:08:13.803998   31648 command_runner.go:130] > # separate_pull_cgroup = ""
	I1003 18:08:13.804008   31648 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1003 18:08:13.804017   31648 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1003 18:08:13.804026   31648 command_runner.go:130] > # will be added.
	I1003 18:08:13.804035   31648 command_runner.go:130] > # default_capabilities = [
	I1003 18:08:13.804043   31648 command_runner.go:130] > # 	"CHOWN",
	I1003 18:08:13.804050   31648 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1003 18:08:13.804055   31648 command_runner.go:130] > # 	"FSETID",
	I1003 18:08:13.804066   31648 command_runner.go:130] > # 	"FOWNER",
	I1003 18:08:13.804071   31648 command_runner.go:130] > # 	"SETGID",
	I1003 18:08:13.804087   31648 command_runner.go:130] > # 	"SETUID",
	I1003 18:08:13.804093   31648 command_runner.go:130] > # 	"SETPCAP",
	I1003 18:08:13.804097   31648 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1003 18:08:13.804102   31648 command_runner.go:130] > # 	"KILL",
	I1003 18:08:13.804105   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804112   31648 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1003 18:08:13.804121   31648 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1003 18:08:13.804125   31648 command_runner.go:130] > # add_inheritable_capabilities = false
	I1003 18:08:13.804133   31648 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1003 18:08:13.804138   31648 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1003 18:08:13.804143   31648 command_runner.go:130] > default_sysctls = [
	I1003 18:08:13.804147   31648 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1003 18:08:13.804150   31648 command_runner.go:130] > ]
	I1003 18:08:13.804157   31648 command_runner.go:130] > # List of devices on the host that a
	I1003 18:08:13.804163   31648 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1003 18:08:13.804169   31648 command_runner.go:130] > # allowed_devices = [
	I1003 18:08:13.804173   31648 command_runner.go:130] > # 	"/dev/fuse",
	I1003 18:08:13.804178   31648 command_runner.go:130] > # 	"/dev/net/tun",
	I1003 18:08:13.804181   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804188   31648 command_runner.go:130] > # List of additional devices, specified as
	I1003 18:08:13.804194   31648 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1003 18:08:13.804201   31648 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1003 18:08:13.804207   31648 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1003 18:08:13.804212   31648 command_runner.go:130] > # additional_devices = [
	I1003 18:08:13.804215   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804222   31648 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1003 18:08:13.804226   31648 command_runner.go:130] > # cdi_spec_dirs = [
	I1003 18:08:13.804231   31648 command_runner.go:130] > # 	"/etc/cdi",
	I1003 18:08:13.804235   31648 command_runner.go:130] > # 	"/var/run/cdi",
	I1003 18:08:13.804237   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804243   31648 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1003 18:08:13.804251   31648 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1003 18:08:13.804254   31648 command_runner.go:130] > # Defaults to false.
	I1003 18:08:13.804261   31648 command_runner.go:130] > # device_ownership_from_security_context = false
	I1003 18:08:13.804268   31648 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1003 18:08:13.804275   31648 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1003 18:08:13.804279   31648 command_runner.go:130] > # hooks_dir = [
	I1003 18:08:13.804286   31648 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1003 18:08:13.804290   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804297   31648 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1003 18:08:13.804303   31648 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1003 18:08:13.804309   31648 command_runner.go:130] > # its default mounts from the following two files:
	I1003 18:08:13.804312   31648 command_runner.go:130] > #
	I1003 18:08:13.804320   31648 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1003 18:08:13.804326   31648 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1003 18:08:13.804333   31648 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1003 18:08:13.804336   31648 command_runner.go:130] > #
	I1003 18:08:13.804342   31648 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1003 18:08:13.804349   31648 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1003 18:08:13.804356   31648 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1003 18:08:13.804363   31648 command_runner.go:130] > #      only add mounts it finds in this file.
	I1003 18:08:13.804366   31648 command_runner.go:130] > #
	I1003 18:08:13.804372   31648 command_runner.go:130] > # default_mounts_file = ""
	I1003 18:08:13.804376   31648 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1003 18:08:13.804384   31648 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1003 18:08:13.804388   31648 command_runner.go:130] > # pids_limit = -1
	I1003 18:08:13.804396   31648 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1003 18:08:13.804401   31648 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1003 18:08:13.804409   31648 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1003 18:08:13.804417   31648 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1003 18:08:13.804422   31648 command_runner.go:130] > # log_size_max = -1
	I1003 18:08:13.804429   31648 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1003 18:08:13.804435   31648 command_runner.go:130] > # log_to_journald = false
	I1003 18:08:13.804441   31648 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1003 18:08:13.804447   31648 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1003 18:08:13.804451   31648 command_runner.go:130] > # Path to directory for container attach sockets.
	I1003 18:08:13.804458   31648 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1003 18:08:13.804463   31648 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1003 18:08:13.804469   31648 command_runner.go:130] > # bind_mount_prefix = ""
	I1003 18:08:13.804473   31648 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1003 18:08:13.804479   31648 command_runner.go:130] > # read_only = false
	I1003 18:08:13.804486   31648 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1003 18:08:13.804494   31648 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1003 18:08:13.804497   31648 command_runner.go:130] > # live configuration reload.
	I1003 18:08:13.804501   31648 command_runner.go:130] > # log_level = "info"
	I1003 18:08:13.804508   31648 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1003 18:08:13.804513   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.804519   31648 command_runner.go:130] > # log_filter = ""
	I1003 18:08:13.804524   31648 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1003 18:08:13.804532   31648 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1003 18:08:13.804535   31648 command_runner.go:130] > # separated by comma.
	I1003 18:08:13.804544   31648 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1003 18:08:13.804551   31648 command_runner.go:130] > # uid_mappings = ""
	I1003 18:08:13.804557   31648 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1003 18:08:13.804564   31648 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1003 18:08:13.804569   31648 command_runner.go:130] > # separated by comma.
	I1003 18:08:13.804578   31648 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1003 18:08:13.804582   31648 command_runner.go:130] > # gid_mappings = ""
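	As a concrete illustration of the containerUID:HostUID:Size form described above, a sketch with assumed values mapping container IDs 0-65535 onto host IDs starting at 100000:
	# illustrative values, not from this run
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"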
	I1003 18:08:13.804589   31648 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1003 18:08:13.804595   31648 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1003 18:08:13.804603   31648 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1003 18:08:13.804612   31648 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1003 18:08:13.804618   31648 command_runner.go:130] > # minimum_mappable_uid = -1
	I1003 18:08:13.804624   31648 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1003 18:08:13.804631   31648 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1003 18:08:13.804636   31648 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1003 18:08:13.804645   31648 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1003 18:08:13.804651   31648 command_runner.go:130] > # minimum_mappable_gid = -1
	I1003 18:08:13.804657   31648 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1003 18:08:13.804669   31648 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1003 18:08:13.804674   31648 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I1003 18:08:13.804680   31648 command_runner.go:130] > # ctr_stop_timeout = 30
	I1003 18:08:13.804685   31648 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1003 18:08:13.804693   31648 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1003 18:08:13.804697   31648 command_runner.go:130] > # a kernel-separating runtime (like kata).
	I1003 18:08:13.804703   31648 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1003 18:08:13.804707   31648 command_runner.go:130] > # drop_infra_ctr = true
	I1003 18:08:13.804715   31648 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1003 18:08:13.804720   31648 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1003 18:08:13.804728   31648 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1003 18:08:13.804735   31648 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1003 18:08:13.804742   31648 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1003 18:08:13.804749   31648 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1003 18:08:13.804754   31648 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1003 18:08:13.804761   31648 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1003 18:08:13.804765   31648 command_runner.go:130] > # shared_cpuset = ""
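	The Linux CPU list format referenced above uses comma-separated CPUs and dash-separated ranges; a sketch with assumed values:
	# reserve CPUs 0-1 and 6 for infra containers (illustrative)
	infra_ctr_cpuset = "0-1,6"
	# allow CPUs 2-3 to be shared between guaranteed containers (illustrative)
	shared_cpuset = "2-3"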
	I1003 18:08:13.804773   31648 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1003 18:08:13.804777   31648 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1003 18:08:13.804783   31648 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1003 18:08:13.804789   31648 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1003 18:08:13.804795   31648 command_runner.go:130] > # pinns_path = ""
	I1003 18:08:13.804800   31648 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1003 18:08:13.804808   31648 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1003 18:08:13.804813   31648 command_runner.go:130] > # enable_criu_support = true
	I1003 18:08:13.804819   31648 command_runner.go:130] > # Enable/disable the generation of the container and
	I1003 18:08:13.804825   31648 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1003 18:08:13.804832   31648 command_runner.go:130] > # enable_pod_events = false
	I1003 18:08:13.804837   31648 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1003 18:08:13.804844   31648 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1003 18:08:13.804848   31648 command_runner.go:130] > # default_runtime = "crun"
	I1003 18:08:13.804855   31648 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1003 18:08:13.804862   31648 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of being created as a directory).
	I1003 18:08:13.804874   31648 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1003 18:08:13.804881   31648 command_runner.go:130] > # creation as a file is not desired either.
	I1003 18:08:13.804889   31648 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1003 18:08:13.804896   31648 command_runner.go:130] > # the hostname is being managed dynamically.
	I1003 18:08:13.804900   31648 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1003 18:08:13.804905   31648 command_runner.go:130] > # ]
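	Using the /etc/hostname case described above, a sketch of a populated list:
	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]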
	I1003 18:08:13.804912   31648 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1003 18:08:13.804920   31648 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1003 18:08:13.804926   31648 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1003 18:08:13.804931   31648 command_runner.go:130] > # Each entry in the table should follow the format:
	I1003 18:08:13.804934   31648 command_runner.go:130] > #
	I1003 18:08:13.804941   31648 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1003 18:08:13.804945   31648 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1003 18:08:13.804952   31648 command_runner.go:130] > # runtime_type = "oci"
	I1003 18:08:13.804956   31648 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1003 18:08:13.804963   31648 command_runner.go:130] > # inherit_default_runtime = false
	I1003 18:08:13.804968   31648 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1003 18:08:13.804988   31648 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1003 18:08:13.804996   31648 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1003 18:08:13.805005   31648 command_runner.go:130] > # monitor_env = []
	I1003 18:08:13.805011   31648 command_runner.go:130] > # privileged_without_host_devices = false
	I1003 18:08:13.805017   31648 command_runner.go:130] > # allowed_annotations = []
	I1003 18:08:13.805022   31648 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1003 18:08:13.805028   31648 command_runner.go:130] > # no_sync_log = false
	I1003 18:08:13.805032   31648 command_runner.go:130] > # default_annotations = {}
	I1003 18:08:13.805038   31648 command_runner.go:130] > # stream_websockets = false
	I1003 18:08:13.805042   31648 command_runner.go:130] > # seccomp_profile = ""
	I1003 18:08:13.805062   31648 command_runner.go:130] > # Where:
	I1003 18:08:13.805069   31648 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1003 18:08:13.805075   31648 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1003 18:08:13.805081   31648 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1003 18:08:13.805089   31648 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1003 18:08:13.805092   31648 command_runner.go:130] > #   in $PATH.
	I1003 18:08:13.805100   31648 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1003 18:08:13.805105   31648 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1003 18:08:13.805112   31648 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1003 18:08:13.805115   31648 command_runner.go:130] > #   state.
	I1003 18:08:13.805121   31648 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1003 18:08:13.805128   31648 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1003 18:08:13.805133   31648 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1003 18:08:13.805141   31648 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1003 18:08:13.805146   31648 command_runner.go:130] > #   the values from the default runtime on load time.
	I1003 18:08:13.805153   31648 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1003 18:08:13.805158   31648 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1003 18:08:13.805165   31648 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1003 18:08:13.805177   31648 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1003 18:08:13.805183   31648 command_runner.go:130] > #   The currently recognized values are:
	I1003 18:08:13.805190   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1003 18:08:13.805199   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1003 18:08:13.805207   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1003 18:08:13.805214   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1003 18:08:13.805221   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1003 18:08:13.805229   31648 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1003 18:08:13.805235   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1003 18:08:13.805243   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1003 18:08:13.805251   31648 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1003 18:08:13.805257   31648 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1003 18:08:13.805265   31648 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1003 18:08:13.805273   31648 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1003 18:08:13.805278   31648 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1003 18:08:13.805285   31648 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1003 18:08:13.805291   31648 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1003 18:08:13.805300   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1003 18:08:13.805308   31648 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1003 18:08:13.805312   31648 command_runner.go:130] > #   deprecated option "conmon".
	I1003 18:08:13.805319   31648 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1003 18:08:13.805326   31648 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1003 18:08:13.805332   31648 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1003 18:08:13.805339   31648 command_runner.go:130] > #   should be moved to the container's cgroup
	I1003 18:08:13.805346   31648 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1003 18:08:13.805352   31648 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1003 18:08:13.805358   31648 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1003 18:08:13.805364   31648 command_runner.go:130] > #   conmon-rs by using:
	I1003 18:08:13.805370   31648 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1003 18:08:13.805379   31648 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1003 18:08:13.805388   31648 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1003 18:08:13.805395   31648 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1003 18:08:13.805401   31648 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1003 18:08:13.805415   31648 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1003 18:08:13.805423   31648 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1003 18:08:13.805430   31648 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1003 18:08:13.805437   31648 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1003 18:08:13.805449   31648 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1003 18:08:13.805455   31648 command_runner.go:130] > #   when a machine crash happens.
	I1003 18:08:13.805462   31648 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1003 18:08:13.805471   31648 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1003 18:08:13.805480   31648 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1003 18:08:13.805485   31648 command_runner.go:130] > #   seccomp profile for the runtime.
	I1003 18:08:13.805491   31648 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1003 18:08:13.805499   31648 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
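	Putting the fields above together, a hypothetical VM-type handler entry could look like the sketch below; the kata binary and config paths are assumptions for illustration, not taken from this run:
	[crio.runtime.runtimes.kata]
	# absolute path to the runtime executable (assumed)
	runtime_path = "/usr/bin/containerd-shim-kata-v2"
	runtime_type = "vm"
	# runtime_config_path is only valid for the "vm" runtime_type (assumed path)
	runtime_config_path = "/etc/kata-containers/configuration.toml"
	privileged_without_host_devices = true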
	I1003 18:08:13.805504   31648 command_runner.go:130] > #
	I1003 18:08:13.805508   31648 command_runner.go:130] > # Using the seccomp notifier feature:
	I1003 18:08:13.805513   31648 command_runner.go:130] > #
	I1003 18:08:13.805518   31648 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1003 18:08:13.805528   31648 command_runner.go:130] > # blocked syscalls (permission denied errors) have a negative impact on the workload.
	I1003 18:08:13.805533   31648 command_runner.go:130] > #
	I1003 18:08:13.805539   31648 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1003 18:08:13.805547   31648 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1003 18:08:13.805549   31648 command_runner.go:130] > #
	I1003 18:08:13.805555   31648 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1003 18:08:13.805560   31648 command_runner.go:130] > # feature.
	I1003 18:08:13.805563   31648 command_runner.go:130] > #
	I1003 18:08:13.805568   31648 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1003 18:08:13.805576   31648 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1003 18:08:13.805582   31648 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1003 18:08:13.805589   31648 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1003 18:08:13.805595   31648 command_runner.go:130] > # seconds if the annotation is set to "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1003 18:08:13.805600   31648 command_runner.go:130] > #
	I1003 18:08:13.805605   31648 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1003 18:08:13.805614   31648 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1003 18:08:13.805619   31648 command_runner.go:130] > #
	I1003 18:08:13.805625   31648 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1003 18:08:13.805632   31648 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1003 18:08:13.805635   31648 command_runner.go:130] > #
	I1003 18:08:13.805641   31648 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1003 18:08:13.805649   31648 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1003 18:08:13.805652   31648 command_runner.go:130] > # limitation.
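	To use the notifier described above, the handler must allow-list the annotation; a minimal drop-in sketch (the file name 99-seccomp-notifier.conf is an assumption):
	# /etc/crio/crio.conf.d/99-seccomp-notifier.conf (assumed location)
	[crio.runtime.runtimes.crun]
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]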
	I1003 18:08:13.805656   31648 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1003 18:08:13.805666   31648 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1003 18:08:13.805671   31648 command_runner.go:130] > runtime_type = ""
	I1003 18:08:13.805675   31648 command_runner.go:130] > runtime_root = "/run/crun"
	I1003 18:08:13.805679   31648 command_runner.go:130] > inherit_default_runtime = false
	I1003 18:08:13.805683   31648 command_runner.go:130] > runtime_config_path = ""
	I1003 18:08:13.805689   31648 command_runner.go:130] > container_min_memory = ""
	I1003 18:08:13.805694   31648 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1003 18:08:13.805700   31648 command_runner.go:130] > monitor_cgroup = "pod"
	I1003 18:08:13.805704   31648 command_runner.go:130] > monitor_exec_cgroup = ""
	I1003 18:08:13.805710   31648 command_runner.go:130] > allowed_annotations = [
	I1003 18:08:13.805714   31648 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1003 18:08:13.805718   31648 command_runner.go:130] > ]
	I1003 18:08:13.805722   31648 command_runner.go:130] > privileged_without_host_devices = false
	I1003 18:08:13.805728   31648 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1003 18:08:13.805733   31648 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1003 18:08:13.805738   31648 command_runner.go:130] > runtime_type = ""
	I1003 18:08:13.805742   31648 command_runner.go:130] > runtime_root = "/run/runc"
	I1003 18:08:13.805748   31648 command_runner.go:130] > inherit_default_runtime = false
	I1003 18:08:13.805751   31648 command_runner.go:130] > runtime_config_path = ""
	I1003 18:08:13.805758   31648 command_runner.go:130] > container_min_memory = ""
	I1003 18:08:13.805762   31648 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1003 18:08:13.805767   31648 command_runner.go:130] > monitor_cgroup = "pod"
	I1003 18:08:13.805771   31648 command_runner.go:130] > monitor_exec_cgroup = ""
	I1003 18:08:13.805778   31648 command_runner.go:130] > privileged_without_host_devices = false
	I1003 18:08:13.805784   31648 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1003 18:08:13.805790   31648 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1003 18:08:13.805796   31648 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1003 18:08:13.805805   31648 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1003 18:08:13.805817   31648 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1003 18:08:13.805828   31648 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1003 18:08:13.805837   31648 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1003 18:08:13.805842   31648 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1003 18:08:13.805852   31648 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1003 18:08:13.805860   31648 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1003 18:08:13.805867   31648 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1003 18:08:13.805873   31648 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1003 18:08:13.805878   31648 command_runner.go:130] > # Example:
	I1003 18:08:13.805882   31648 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1003 18:08:13.805886   31648 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1003 18:08:13.805893   31648 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1003 18:08:13.805899   31648 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1003 18:08:13.805903   31648 command_runner.go:130] > # cpuset = "0-1"
	I1003 18:08:13.805906   31648 command_runner.go:130] > # cpushares = "5"
	I1003 18:08:13.805910   31648 command_runner.go:130] > # cpuquota = "1000"
	I1003 18:08:13.805919   31648 command_runner.go:130] > # cpuperiod = "100000"
	I1003 18:08:13.805924   31648 command_runner.go:130] > # cpulimit = "35"
	I1003 18:08:13.805933   31648 command_runner.go:130] > # Where:
	I1003 18:08:13.805940   31648 command_runner.go:130] > # The workload name is workload-type.
	I1003 18:08:13.805950   31648 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1003 18:08:13.805955   31648 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1003 18:08:13.805960   31648 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1003 18:08:13.805971   31648 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1003 18:08:13.805994   31648 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1003 18:08:13.806006   31648 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1003 18:08:13.806019   31648 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1003 18:08:13.806027   31648 command_runner.go:130] > # Default value is set to true
	I1003 18:08:13.806031   31648 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1003 18:08:13.806036   31648 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1003 18:08:13.806040   31648 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1003 18:08:13.806047   31648 command_runner.go:130] > # Default value is set to 'false'
	I1003 18:08:13.806052   31648 command_runner.go:130] > # disable_hostport_mapping = false
	I1003 18:08:13.806057   31648 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1003 18:08:13.806066   31648 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1003 18:08:13.806074   31648 command_runner.go:130] > # timezone = ""
	I1003 18:08:13.806085   31648 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1003 18:08:13.806093   31648 command_runner.go:130] > #
	I1003 18:08:13.806105   31648 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1003 18:08:13.806116   31648 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1003 18:08:13.806122   31648 command_runner.go:130] > [crio.image]
	I1003 18:08:13.806127   31648 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1003 18:08:13.806134   31648 command_runner.go:130] > # default_transport = "docker://"
	I1003 18:08:13.806139   31648 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1003 18:08:13.806147   31648 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1003 18:08:13.806154   31648 command_runner.go:130] > # global_auth_file = ""
	I1003 18:08:13.806159   31648 command_runner.go:130] > # The image used to instantiate infra containers.
	I1003 18:08:13.806165   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.806170   31648 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1003 18:08:13.806178   31648 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1003 18:08:13.806185   31648 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1003 18:08:13.806190   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.806196   31648 command_runner.go:130] > # pause_image_auth_file = ""
	I1003 18:08:13.806202   31648 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1003 18:08:13.806209   31648 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1003 18:08:13.806215   31648 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1003 18:08:13.806220   31648 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1003 18:08:13.806226   31648 command_runner.go:130] > # pause_command = "/pause"
	I1003 18:08:13.806231   31648 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1003 18:08:13.806239   31648 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1003 18:08:13.806244   31648 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1003 18:08:13.806252   31648 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1003 18:08:13.806257   31648 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1003 18:08:13.806264   31648 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1003 18:08:13.806268   31648 command_runner.go:130] > # pinned_images = [
	I1003 18:08:13.806271   31648 command_runner.go:130] > # ]
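	A sketch exercising the three pattern kinds just described (exact, trailing glob, and keyword); the image names are illustrative:
	pinned_images = [
		"registry.k8s.io/pause:3.10.1",  # exact: must match the entire name
		"quay.io/crio/*",                # glob: wildcard at the end
		"*pause*",                       # keyword: wildcards on both ends
	]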
	I1003 18:08:13.806278   31648 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1003 18:08:13.806286   31648 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1003 18:08:13.806293   31648 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1003 18:08:13.806301   31648 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1003 18:08:13.806306   31648 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1003 18:08:13.806312   31648 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1003 18:08:13.806318   31648 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1003 18:08:13.806325   31648 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1003 18:08:13.806333   31648 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1003 18:08:13.806341   31648 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1003 18:08:13.806347   31648 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1003 18:08:13.806353   31648 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1003 18:08:13.806358   31648 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1003 18:08:13.806366   31648 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1003 18:08:13.806369   31648 command_runner.go:130] > # changing them here.
	I1003 18:08:13.806374   31648 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1003 18:08:13.806380   31648 command_runner.go:130] > # insecure_registries = [
	I1003 18:08:13.806383   31648 command_runner.go:130] > # ]
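	Per the deprecation note above, the supported place for this is containers-registries.conf(5); a minimal sketch marking an assumed internal registry as insecure:
	# /etc/containers/registries.conf (registry host is an assumption)
	[[registry]]
	location = "registry.example.internal:5000"
	insecure = true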
	I1003 18:08:13.806391   31648 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1003 18:08:13.806398   31648 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1003 18:08:13.806404   31648 command_runner.go:130] > # image_volumes = "mkdir"
	I1003 18:08:13.806409   31648 command_runner.go:130] > # Temporary directory to use for storing big files
	I1003 18:08:13.806415   31648 command_runner.go:130] > # big_files_temporary_dir = ""
	I1003 18:08:13.806420   31648 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1003 18:08:13.806429   31648 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1003 18:08:13.806435   31648 command_runner.go:130] > # auto_reload_registries = false
	I1003 18:08:13.806441   31648 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1003 18:08:13.806450   31648 command_runner.go:130] > # gets canceled. This value will also be used to calculate the pull progress interval, as pull_progress_timeout / 10.
	I1003 18:08:13.806467   31648 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1003 18:08:13.806473   31648 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1003 18:08:13.806477   31648 command_runner.go:130] > # The mode of short name resolution.
	I1003 18:08:13.806484   31648 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1003 18:08:13.806492   31648 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1003 18:08:13.806499   31648 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1003 18:08:13.806503   31648 command_runner.go:130] > # short_name_mode = "enforcing"
	I1003 18:08:13.806511   31648 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support mounting OCI artifacts.
	I1003 18:08:13.806518   31648 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1003 18:08:13.806523   31648 command_runner.go:130] > # oci_artifact_mount_support = true
	I1003 18:08:13.806530   31648 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1003 18:08:13.806535   31648 command_runner.go:130] > # CNI plugins.
	I1003 18:08:13.806541   31648 command_runner.go:130] > [crio.network]
	I1003 18:08:13.806546   31648 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1003 18:08:13.806553   31648 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1003 18:08:13.806557   31648 command_runner.go:130] > # cni_default_network = ""
	I1003 18:08:13.806562   31648 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1003 18:08:13.806568   31648 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1003 18:08:13.806573   31648 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1003 18:08:13.806580   31648 command_runner.go:130] > # plugin_dirs = [
	I1003 18:08:13.806584   31648 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1003 18:08:13.806589   31648 command_runner.go:130] > # ]
	I1003 18:08:13.806593   31648 command_runner.go:130] > # List of included pod metrics.
	I1003 18:08:13.806599   31648 command_runner.go:130] > # included_pod_metrics = [
	I1003 18:08:13.806603   31648 command_runner.go:130] > # ]
	I1003 18:08:13.806610   31648 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1003 18:08:13.806614   31648 command_runner.go:130] > [crio.metrics]
	I1003 18:08:13.806618   31648 command_runner.go:130] > # Globally enable or disable metrics support.
	I1003 18:08:13.806624   31648 command_runner.go:130] > # enable_metrics = false
	I1003 18:08:13.806629   31648 command_runner.go:130] > # Specify enabled metrics collectors.
	I1003 18:08:13.806635   31648 command_runner.go:130] > # Per default all metrics are enabled.
	I1003 18:08:13.806640   31648 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1003 18:08:13.806647   31648 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1003 18:08:13.806654   31648 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1003 18:08:13.806662   31648 command_runner.go:130] > # metrics_collectors = [
	I1003 18:08:13.806668   31648 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1003 18:08:13.806672   31648 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1003 18:08:13.806676   31648 command_runner.go:130] > # 	"containers_oom_total",
	I1003 18:08:13.806679   31648 command_runner.go:130] > # 	"processes_defunct",
	I1003 18:08:13.806682   31648 command_runner.go:130] > # 	"operations_total",
	I1003 18:08:13.806687   31648 command_runner.go:130] > # 	"operations_latency_seconds",
	I1003 18:08:13.806691   31648 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1003 18:08:13.806694   31648 command_runner.go:130] > # 	"operations_errors_total",
	I1003 18:08:13.806697   31648 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1003 18:08:13.806701   31648 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1003 18:08:13.806705   31648 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1003 18:08:13.806709   31648 command_runner.go:130] > # 	"image_pulls_success_total",
	I1003 18:08:13.806713   31648 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1003 18:08:13.806716   31648 command_runner.go:130] > # 	"containers_oom_count_total",
	I1003 18:08:13.806720   31648 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1003 18:08:13.806724   31648 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1003 18:08:13.806728   31648 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1003 18:08:13.806730   31648 command_runner.go:130] > # ]
	I1003 18:08:13.806736   31648 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1003 18:08:13.806739   31648 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1003 18:08:13.806744   31648 command_runner.go:130] > # The port on which the metrics server will listen.
	I1003 18:08:13.806747   31648 command_runner.go:130] > # metrics_port = 9090
	I1003 18:08:13.806751   31648 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1003 18:08:13.806755   31648 command_runner.go:130] > # metrics_socket = ""
	I1003 18:08:13.806759   31648 command_runner.go:130] > # The certificate for the secure metrics server.
	I1003 18:08:13.806765   31648 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1003 18:08:13.806770   31648 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1003 18:08:13.806774   31648 command_runner.go:130] > # certificate on any modification event.
	I1003 18:08:13.806780   31648 command_runner.go:130] > # metrics_cert = ""
	I1003 18:08:13.806785   31648 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1003 18:08:13.806791   31648 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1003 18:08:13.806795   31648 command_runner.go:130] > # metrics_key = ""
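	Taken together, a minimal drop-in sketch enabling the metrics endpoint on the defaults shown above:
	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090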
	I1003 18:08:13.806802   31648 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1003 18:08:13.806805   31648 command_runner.go:130] > [crio.tracing]
	I1003 18:08:13.806810   31648 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1003 18:08:13.806816   31648 command_runner.go:130] > # enable_tracing = false
	I1003 18:08:13.806821   31648 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1003 18:08:13.806827   31648 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1003 18:08:13.806834   31648 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1003 18:08:13.806841   31648 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
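	A sketch enabling tracing against a local OTLP collector and sampling every span; the endpoint is the default shown above:
	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	# 1000000 samples per million = always sample
	tracing_sampling_rate_per_million = 1000000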
	I1003 18:08:13.806845   31648 command_runner.go:130] > # CRI-O NRI configuration.
	I1003 18:08:13.806850   31648 command_runner.go:130] > [crio.nri]
	I1003 18:08:13.806854   31648 command_runner.go:130] > # Globally enable or disable NRI.
	I1003 18:08:13.806860   31648 command_runner.go:130] > # enable_nri = true
	I1003 18:08:13.806864   31648 command_runner.go:130] > # NRI socket to listen on.
	I1003 18:08:13.806870   31648 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1003 18:08:13.806874   31648 command_runner.go:130] > # NRI plugin directory to use.
	I1003 18:08:13.806880   31648 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1003 18:08:13.806885   31648 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1003 18:08:13.806891   31648 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1003 18:08:13.806896   31648 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1003 18:08:13.806926   31648 command_runner.go:130] > # nri_disable_connections = false
	I1003 18:08:13.806934   31648 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1003 18:08:13.806938   31648 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1003 18:08:13.806944   31648 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1003 18:08:13.806948   31648 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1003 18:08:13.806955   31648 command_runner.go:130] > # NRI default validator configuration.
	I1003 18:08:13.806961   31648 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1003 18:08:13.806968   31648 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1003 18:08:13.806972   31648 command_runner.go:130] > # can be restricted/rejected:
	I1003 18:08:13.806990   31648 command_runner.go:130] > # - OCI hook injection
	I1003 18:08:13.806998   31648 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1003 18:08:13.807007   31648 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1003 18:08:13.807014   31648 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1003 18:08:13.807024   31648 command_runner.go:130] > # - adjustment of linux namespaces
	I1003 18:08:13.807033   31648 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1003 18:08:13.807041   31648 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1003 18:08:13.807046   31648 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1003 18:08:13.807051   31648 command_runner.go:130] > #
	I1003 18:08:13.807055   31648 command_runner.go:130] > # [crio.nri.default_validator]
	I1003 18:08:13.807060   31648 command_runner.go:130] > # nri_enable_default_validator = false
	I1003 18:08:13.807067   31648 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1003 18:08:13.807072   31648 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1003 18:08:13.807079   31648 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1003 18:08:13.807083   31648 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1003 18:08:13.807088   31648 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1003 18:08:13.807094   31648 command_runner.go:130] > # nri_validator_required_plugins = [
	I1003 18:08:13.807097   31648 command_runner.go:130] > # ]
	I1003 18:08:13.807104   31648 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
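	A sketch enabling the default validator and rejecting only OCI hook injection, using the keys listed above (values illustrative; all other rejections left at their defaults):
	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true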
	I1003 18:08:13.807109   31648 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1003 18:08:13.807115   31648 command_runner.go:130] > [crio.stats]
	I1003 18:08:13.807121   31648 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1003 18:08:13.807128   31648 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1003 18:08:13.807132   31648 command_runner.go:130] > # stats_collection_period = 0
	I1003 18:08:13.807141   31648 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1003 18:08:13.807147   31648 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1003 18:08:13.807154   31648 command_runner.go:130] > # collection_period = 0
	I1003 18:08:13.807173   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.78773481Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1003 18:08:13.807183   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.787758775Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1003 18:08:13.807194   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.787775454Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1003 18:08:13.807203   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.78779273Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1003 18:08:13.807213   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.7878475Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.807222   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.788021357Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1003 18:08:13.807234   31648 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1003 18:08:13.807290   31648 cni.go:84] Creating CNI manager for ""
	I1003 18:08:13.807303   31648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:08:13.807321   31648 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:08:13.807344   31648 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-889240 NodeName:functional-889240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:08:13.807460   31648 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-889240"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 18:08:13.807513   31648 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:08:13.814815   31648 command_runner.go:130] > kubeadm
	I1003 18:08:13.814829   31648 command_runner.go:130] > kubectl
	I1003 18:08:13.814834   31648 command_runner.go:130] > kubelet
	I1003 18:08:13.815427   31648 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:08:13.815489   31648 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:08:13.822648   31648 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1003 18:08:13.834615   31648 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:08:13.846006   31648 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1003 18:08:13.857402   31648 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 18:08:13.860916   31648 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1003 18:08:13.860998   31648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:08:13.942536   31648 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:08:13.955386   31648 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240 for IP: 192.168.49.2
	I1003 18:08:13.955406   31648 certs.go:195] generating shared ca certs ...
	I1003 18:08:13.955424   31648 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:08:13.955571   31648 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:08:13.955642   31648 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:08:13.955660   31648 certs.go:257] generating profile certs ...
	I1003 18:08:13.955770   31648 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.key
	I1003 18:08:13.955933   31648 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key.eb3f8f7c
	I1003 18:08:13.956034   31648 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key
	I1003 18:08:13.956049   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:08:13.956072   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:08:13.956090   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:08:13.956107   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:08:13.956123   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:08:13.956140   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:08:13.956160   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:08:13.956185   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:08:13.956244   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:08:13.956286   31648 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:08:13.956298   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:08:13.956331   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:08:13.956364   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:08:13.956397   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:08:13.956451   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:08:13.956487   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:08:13.956507   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:08:13.956528   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:13.957144   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:08:13.973779   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:08:13.990161   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:08:14.006157   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:08:14.022253   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 18:08:14.038198   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:08:14.054095   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:08:14.069959   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:08:14.085810   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:08:14.101812   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:08:14.117716   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:08:14.134093   31648 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
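The scp lines above implement a simple asset plan: each NewFileAsset pairs a local certificate with its destination inside the node, and the runner then copies them one by one. A minimal Go sketch of that plan follows; a plain local file copy stands in for minikube's SSH-based transfer, and the two entries in main are illustrative, not the full set from the log.

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// copyAssets mirrors the NewFileAsset/scp sequence above: each local
// certificate path is mapped to a destination inside the node and copied.
func copyAssets(assets map[string]string) error {
	for src, dst := range assets {
		if err := copyFile(src, dst); err != nil {
			return fmt.Errorf("scp %s --> %s: %w", src, dst, err)
		}
	}
	return nil
}

// copyFile is a local stand-in for the SSH copy the runner performs.
func copyFile(src, dst string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Illustrative subset of the source -> destination pairs in the log.
	assets := map[string]string{
		"/home/jenkins/.minikube/ca.crt":              "/var/lib/minikube/certs/ca.crt",
		"/home/jenkins/.minikube/proxy-client-ca.crt": "/var/lib/minikube/certs/proxy-client-ca.crt",
	}
	if err := copyAssets(assets); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}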
	I1003 18:08:14.145835   31648 ssh_runner.go:195] Run: openssl version
	I1003 18:08:14.151369   31648 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1003 18:08:14.151660   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:08:14.160011   31648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:08:14.163572   31648 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:08:14.163595   31648 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:08:14.163631   31648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:08:14.196823   31648 command_runner.go:130] > 3ec20f2e
	I1003 18:08:14.197073   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:08:14.204835   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:08:14.212908   31648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:14.216400   31648 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:14.216425   31648 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:14.216454   31648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:14.249946   31648 command_runner.go:130] > b5213941
	I1003 18:08:14.250032   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:08:14.257940   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:08:14.266302   31648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:08:14.269939   31648 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:08:14.269964   31648 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:08:14.270013   31648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:08:14.303247   31648 command_runner.go:130] > 51391683
	I1003 18:08:14.303479   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
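Each CA bundle installed above is named by its OpenSSL subject hash: openssl x509 -hash -noout prints the hash (for example 3ec20f2e), and a symlink /etc/ssl/certs/<hash>.0 makes the certificate discoverable by OpenSSL-based clients. A small Go sketch of that install step, shelling out to openssl the same way the runner does (paths illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert hashes a PEM certificate with `openssl x509 -hash -noout`
// and symlinks it into /etc/ssl/certs as <hash>.0, matching the
// "test -L ... || ln -fs ..." commands in the log.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink already present, nothing to do
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}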
	I1003 18:08:14.311263   31648 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:08:14.314772   31648 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:08:14.314798   31648 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1003 18:08:14.314807   31648 command_runner.go:130] > Device: 8,1	Inode: 579409      Links: 1
	I1003 18:08:14.314815   31648 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1003 18:08:14.314823   31648 command_runner.go:130] > Access: 2025-10-03 18:04:07.266428775 +0000
	I1003 18:08:14.314828   31648 command_runner.go:130] > Modify: 2025-10-03 18:00:02.305264452 +0000
	I1003 18:08:14.314842   31648 command_runner.go:130] > Change: 2025-10-03 18:00:02.305264452 +0000
	I1003 18:08:14.314851   31648 command_runner.go:130] >  Birth: 2025-10-03 18:00:02.305264452 +0000
	I1003 18:08:14.314920   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 18:08:14.349195   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.349493   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 18:08:14.382820   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.383063   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 18:08:14.416849   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.416933   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 18:08:14.450508   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.450572   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 18:08:14.483927   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.484012   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1003 18:08:14.517658   31648 command_runner.go:130] > Certificate will not expire
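The -checkend 86400 invocations above ask a single question of each control-plane certificate: does it expire within the next 24 hours (86400 seconds)? The same check in Go with crypto/x509, a sketch with an illustrative path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within
// d - the question `openssl x509 -checkend` answers with its exit status.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}

A true result corresponds to openssl printing "Certificate will expire" instead of the "will not expire" lines seen here.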
	I1003 18:08:14.518008   31648 kubeadm.go:400] StartCluster: {Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:08:14.518097   31648 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:08:14.518174   31648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:08:14.544326   31648 cri.go:89] found id: ""
	I1003 18:08:14.544381   31648 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:08:14.551440   31648 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1003 18:08:14.551457   31648 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1003 18:08:14.551463   31648 command_runner.go:130] > /var/lib/minikube/etcd:
	I1003 18:08:14.551962   31648 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 18:08:14.551995   31648 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 18:08:14.552044   31648 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 18:08:14.559024   31648 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:08:14.559104   31648 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-889240" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:14.559135   31648 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-8669/kubeconfig needs updating (will repair): [kubeconfig missing "functional-889240" cluster setting kubeconfig missing "functional-889240" context setting]
	I1003 18:08:14.559426   31648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
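kubeconfig.go above detects that the functional-889240 cluster and context entries are missing and rewrites the kubeconfig under a file lock. A sketch of that repair using client-go's clientcmd package, under the assumption that adding the missing cluster, context, and auth entries is all the repair needs; the CA and client-cert paths are illustrative stand-ins for the profile paths in the log:

package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig adds the named cluster, context, and auth entries to a
// kubeconfig if they are missing, then points CurrentContext at them.
func repairKubeconfig(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		cfg.Clusters[name] = &clientcmdapi.Cluster{
			Server:               server,
			CertificateAuthority: "/home/jenkins/.minikube/ca.crt", // illustrative
		}
	}
	if _, ok := cfg.AuthInfos[name]; !ok {
		cfg.AuthInfos[name] = &clientcmdapi.AuthInfo{
			ClientCertificate: "/home/jenkins/.minikube/profiles/" + name + "/client.crt", // illustrative
			ClientKey:         "/home/jenkins/.minikube/profiles/" + name + "/client.key", // illustrative
		}
	}
	if _, ok := cfg.Contexts[name]; !ok {
		cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
	}
	cfg.CurrentContext = name
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	if err := repairKubeconfig(clientcmd.RecommendedHomeFile, "functional-889240", "https://192.168.49.2:8441"); err != nil {
		log.Fatal(err)
	}
}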
	I1003 18:08:14.562686   31648 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:14.562840   31648 kapi.go:59] client config for functional-889240: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:08:14.563280   31648 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 18:08:14.563295   31648 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1003 18:08:14.563300   31648 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1003 18:08:14.563305   31648 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 18:08:14.563310   31648 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1003 18:08:14.563344   31648 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1003 18:08:14.563668   31648 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 18:08:14.571379   31648 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1003 18:08:14.571411   31648 kubeadm.go:601] duration metric: took 19.407047ms to restartPrimaryControlPlane
	I1003 18:08:14.571423   31648 kubeadm.go:402] duration metric: took 53.42211ms to StartCluster
	I1003 18:08:14.571440   31648 settings.go:142] acquiring lock: {Name:mk6bc950503a8f341b8aacc07a8bc72d5db3a25c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:08:14.571546   31648 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:14.572080   31648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:08:14.572261   31648 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:08:14.572328   31648 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 18:08:14.572418   31648 addons.go:69] Setting storage-provisioner=true in profile "functional-889240"
	I1003 18:08:14.572440   31648 addons.go:238] Setting addon storage-provisioner=true in "functional-889240"
	I1003 18:08:14.572443   31648 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:08:14.572454   31648 addons.go:69] Setting default-storageclass=true in profile "functional-889240"
	I1003 18:08:14.572472   31648 host.go:66] Checking if "functional-889240" exists ...
	I1003 18:08:14.572481   31648 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-889240"
	I1003 18:08:14.572708   31648 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:08:14.572822   31648 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:08:14.574934   31648 out.go:179] * Verifying Kubernetes components...
	I1003 18:08:14.575948   31648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:08:14.591352   31648 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:14.591562   31648 kapi.go:59] client config for functional-889240: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:08:14.591895   31648 addons.go:238] Setting addon default-storageclass=true in "functional-889240"
	I1003 18:08:14.591927   31648 host.go:66] Checking if "functional-889240" exists ...
	I1003 18:08:14.592300   31648 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:08:14.592939   31648 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:08:14.594638   31648 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:14.594655   31648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 18:08:14.594693   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:14.617423   31648 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:14.617446   31648 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 18:08:14.617507   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:14.620273   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:14.639039   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:14.672807   31648 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:08:14.684788   31648 node_ready.go:35] waiting up to 6m0s for node "functional-889240" to be "Ready" ...
	I1003 18:08:14.684921   31648 type.go:168] "Request Body" body=""
	I1003 18:08:14.685003   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:14.685252   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:14.730950   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:14.745066   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:14.786328   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:14.786378   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:14.786409   31648 retry.go:31] will retry after 270.951246ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:14.798186   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:14.798232   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:14.798258   31648 retry.go:31] will retry after 360.152106ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
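Both addon manifests fail to validate because the apiserver on localhost:8441 is not up yet, so retry.go re-runs each kubectl apply with a growing, jittered delay (270ms, 360ms, ~400ms, ... in the log). A minimal sketch of that retry loop; the starting delay, growth factor, and attempt count are illustrative rather than minikube's exact schedule:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryApply re-runs a kubectl apply until it succeeds or attempts run out,
// sleeping for a jittered, growing delay between tries - the pattern behind
// the "will retry after ..." lines above.
func retryApply(manifest string, attempts int) error {
	delay := 250 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2 // exponential growth, loosely matching the log's delays
	}
	return err
}

func main() {
	if err := retryApply("/etc/kubernetes/addons/storageclass.yaml", 8); err != nil {
		fmt.Println("giving up:", err)
	}
}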
	I1003 18:08:15.057602   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:15.106841   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:15.109109   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.109138   31648 retry.go:31] will retry after 397.537911ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.159331   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:15.185817   31648 type.go:168] "Request Body" body=""
	I1003 18:08:15.185883   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:15.186219   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:15.210176   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:15.210221   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.210238   31648 retry.go:31] will retry after 493.012433ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.507675   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:15.555577   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:15.557666   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.557696   31648 retry.go:31] will retry after 440.122822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.685949   31648 type.go:168] "Request Body" body=""
	I1003 18:08:15.686038   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:15.686370   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:15.703496   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:15.753710   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:15.753758   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.753776   31648 retry.go:31] will retry after 795.152031ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.998073   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:16.047743   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:16.047782   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.047802   31648 retry.go:31] will retry after 705.62402ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.185279   31648 type.go:168] "Request Body" body=""
	I1003 18:08:16.185360   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:16.185691   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:16.549101   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:16.597196   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:16.599345   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.599377   31648 retry.go:31] will retry after 940.255489ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.685633   31648 type.go:168] "Request Body" body=""
	I1003 18:08:16.685701   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:16.685999   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:16.686058   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
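The repeating GET https://192.168.49.2:8441/api/v1/nodes/functional-889240 requests above are node_ready.go polling for the node's Ready condition, tolerating connection-refused errors until the 6m0s budget runs out. A sketch of the same loop with client-go (node name and interval illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls GET /api/v1/nodes/<name> until the Ready condition is
// True or the timeout elapses; transient errors such as connection refused
// are simply retried, as the warnings in the log show.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(cs, "functional-889240", 6*time.Minute); err != nil {
		panic(err)
	}
}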
	I1003 18:08:16.754204   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:16.801452   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:16.803457   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.803489   31648 retry.go:31] will retry after 1.24021873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:17.184970   31648 type.go:168] "Request Body" body=""
	I1003 18:08:17.185063   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:17.185424   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:17.539832   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:17.590758   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:17.590802   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:17.590823   31648 retry.go:31] will retry after 1.395425458s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:17.685012   31648 type.go:168] "Request Body" body=""
	I1003 18:08:17.685095   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:17.685454   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:18.043958   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:18.094735   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:18.094776   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:18.094793   31648 retry.go:31] will retry after 1.596032935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:18.185003   31648 type.go:168] "Request Body" body=""
	I1003 18:08:18.185100   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:18.185407   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:18.685017   31648 type.go:168] "Request Body" body=""
	I1003 18:08:18.685100   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:18.685393   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:18.986876   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:19.035593   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:19.038332   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:19.038363   31648 retry.go:31] will retry after 1.200373965s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:19.185671   31648 type.go:168] "Request Body" body=""
	I1003 18:08:19.185764   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:19.186105   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:19.186155   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:19.686009   31648 type.go:168] "Request Body" body=""
	I1003 18:08:19.686091   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:19.686423   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:19.691557   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:19.741190   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:19.743532   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:19.743567   31648 retry.go:31] will retry after 3.569328126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:20.185118   31648 type.go:168] "Request Body" body=""
	I1003 18:08:20.185184   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:20.185523   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:20.239734   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:20.289529   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:20.291706   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:20.291741   31648 retry.go:31] will retry after 1.81500567s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:20.685251   31648 type.go:168] "Request Body" body=""
	I1003 18:08:20.685325   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:20.685635   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:21.185510   31648 type.go:168] "Request Body" body=""
	I1003 18:08:21.185583   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:21.185888   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:21.685727   31648 type.go:168] "Request Body" body=""
	I1003 18:08:21.685836   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:21.686208   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:21.686275   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:22.107768   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:22.158032   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:22.158081   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:22.158100   31648 retry.go:31] will retry after 3.676335527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:22.185231   31648 type.go:168] "Request Body" body=""
	I1003 18:08:22.185319   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:22.185614   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:22.685370   31648 type.go:168] "Request Body" body=""
	I1003 18:08:22.685451   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:22.685806   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:23.185639   31648 type.go:168] "Request Body" body=""
	I1003 18:08:23.185743   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:23.186048   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:23.313354   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:23.364461   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:23.364519   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:23.364543   31648 retry.go:31] will retry after 3.926696561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:23.686396   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... node poll repeated every ~500ms with the same connection-refused result; duplicate entries elided ...]
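
The interleaved GET https://192.168.49.2:8441/api/v1/nodes/functional-889240 requests are minikube's node-readiness wait loop: poll the apiserver on a fixed ~500ms interval, treat connection refused as retryable, and stop once the node's Ready condition turns true. A minimal client-go sketch of the same loop, using wait.PollUntilContextTimeout from recent apimachinery releases (the kubeconfig path, node name, and 4-minute timeout are assumptions for illustration):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Poll every 500ms (as in the log) until the node reports Ready or we time out.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond,
			4*time.Minute, true, func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, "functional-889240", metav1.GetOptions{})
				if err != nil {
					// Connection refused while the apiserver restarts: keep retrying.
					fmt.Println("will retry:", err)
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			fmt.Println("node never became Ready:", err)
		}
	}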
	I1003 18:08:25.835120   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:25.883846   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:25.886330   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:25.886360   31648 retry.go:31] will retry after 9.086319041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node poll and periodic node_ready warnings continued every ~500ms (connection refused); duplicate entries elided ...]
	I1003 18:08:27.291951   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:27.344646   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:27.344705   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:27.344728   31648 retry.go:31] will retry after 9.233335187s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node poll and periodic node_ready warnings continued every ~500ms (connection refused); duplicate entries elided ...]
	I1003 18:08:34.973491   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:35.025995   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:35.026042   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:35.026060   31648 retry.go:31] will retry after 13.835197481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
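
Every one of these failures is client-side validation: kubectl apply first tries to download the OpenAPI schema from the apiserver to validate the manifest, and that download is what hits connection refused. The error text itself points at the escape hatch, which here would look like:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml --validate=false

Skipping validation would not rescue this run, though: with nothing listening on 8441, the apply request itself would still fail, so retrying until the apiserver is back (as minikube does) is the correct behavior.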
	[... node poll and periodic node_ready warnings continued every ~500ms (connection refused); duplicate entries elided ...]
	I1003 18:08:36.578491   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:36.629045   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:36.629094   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:36.629123   31648 retry.go:31] will retry after 7.439097167s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node poll and periodic node_ready warnings continued every ~500ms (connection refused); duplicate entries elided ...]
	I1003 18:08:44.068807   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:44.118932   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:44.118993   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:44.119018   31648 retry.go:31] will retry after 11.649333138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node poll and periodic node_ready warnings continued every ~500ms (connection refused); duplicate entries elided ...]
	I1003 18:08:48.862137   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:48.911551   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:48.911612   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:48.911635   31648 retry.go:31] will retry after 10.230842759s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node poll and periodic node_ready warnings continued every ~500ms (connection refused); duplicate entries elided ...]
	I1003 18:08:55.768789   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:55.820187   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:55.820247   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:55.820271   31648 retry.go:31] will retry after 17.817355848s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node poll and periodic node_ready warnings continued every ~500ms (connection refused); duplicate entries elided ...]
	I1003 18:08:59.143069   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:59.193474   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:59.193510   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:59.193527   31648 retry.go:31] will retry after 25.255183485s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:59.685108   31648 type.go:168] "Request Body" body=""
	I1003 18:08:59.685198   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:59.685504   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:00.185069   31648 type.go:168] "Request Body" body=""
	I1003 18:09:00.185163   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:00.185465   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:00.685045   31648 type.go:168] "Request Body" body=""
	I1003 18:09:00.685107   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:00.685401   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:01.185250   31648 type.go:168] "Request Body" body=""
	I1003 18:09:01.185349   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:01.185688   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:01.185754   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:01.685310   31648 type.go:168] "Request Body" body=""
	I1003 18:09:01.685402   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:01.685720   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... GET https://192.168.49.2:8441/api/v1/nodes/functional-889240 polled every ~0.5s from 18:09:02.185 to 18:09:13.186 with the same headers as above; every response was empty (connection refused); node_ready "will retry" warnings logged at 18:09:03.685, 18:09:05.685, 18:09:07.686, 18:09:09.686 and 18:09:12.185 ...]
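The polling above is minikube's node-readiness wait: node_ready.go fetches the node object every 500ms and checks its Ready condition, retrying for as long as the apiserver refuses connections. A minimal sketch of the same check using client-go (illustrative, not minikube's actual code; it assumes a kubeconfig at the default location):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-889240", metav1.GetOptions{})
		if err != nil {
			// While the apiserver is down, this is the "connection refused"
			// path seen in the log: warn and try again after 500ms.
			fmt.Println("error getting node (will retry):", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}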
	I1003 18:09:13.637912   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:09:13.685539   31648 type.go:168] "Request Body" body=""
	I1003 18:09:13.685624   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:13.685989   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:13.686249   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:09:13.688536   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:09:13.688567   31648 retry.go:31] will retry after 16.395640375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
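The retry.go entry above schedules another attempt after a jittered delay, which is why the logged value is an odd 16.395640375s rather than a round number. A minimal sketch of that pattern with apimachinery's backoff helper; the parameters are assumptions, not minikube's actual schedule:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempt := 0
	// Jitter 0.5 spreads each delay over [d, 1.5d), producing irregular
	// retry intervals like the ones in this log.
	backoff := wait.Backoff{Duration: 2 * time.Second, Factor: 2.0, Jitter: 0.5, Steps: 5}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		// Stand-in for the failing kubectl apply; succeed on the 4th try.
		if attempt < 4 {
			fmt.Printf("attempt %d failed, will retry\n", attempt)
			return false, nil
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("gave up:", err)
		return
	}
	fmt.Printf("succeeded on attempt %d\n", attempt)
}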
	[... identical polling of https://192.168.49.2:8441/api/v1/nodes/functional-889240 continued every ~0.5s from 18:09:14.185 to 18:09:24.185, all connection refused; node_ready "will retry" warnings logged at 18:09:14.185, 18:09:16.185, 18:09:18.685, 18:09:20.685 and 18:09:23.185 ...]
	I1003 18:09:24.449821   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:09:24.497529   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:09:24.499857   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:09:24.499886   31648 retry.go:31] will retry after 48.383287224s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
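Every failure in this stretch has the same root cause: nothing is listening on port 8441 yet, so both the direct node GETs and kubectl's OpenAPI download for validation die with "connect: connection refused". A quick way to wait for the socket itself, sketched with apimachinery's poll helper (available in recent apimachinery versions; the address is taken from the log):

package main

import (
	"context"
	"fmt"
	"net"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	err := wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", time.Second)
			if err != nil {
				return false, nil // connection refused: keep polling
			}
			conn.Close()
			return true, nil
		})
	if err != nil {
		fmt.Println("apiserver never came up:", err)
		return
	}
	fmt.Println("apiserver port is accepting connections")
}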
	[... identical polling of https://192.168.49.2:8441/api/v1/nodes/functional-889240 continued every ~0.5s from 18:09:24.685 to 18:09:29.685, all connection refused; node_ready "will retry" warnings logged at 18:09:25.186 and 18:09:27.685 ...]
	I1003 18:09:30.085101   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:09:30.133826   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:09:30.136048   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:09:30.136077   31648 retry.go:31] will retry after 44.319890963s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
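For completeness, the ssh_runner/command_runner pairs above amount to shelling out to kubectl with a pinned KUBECONFIG and reporting stdout and stderr separately, which is why the retry entries show an empty stdout: block above a populated stderr:. A rough sketch of that invocation; the paths mirror the log but are illustrative:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "apply", "--force", "-f",
		"/etc/kubernetes/addons/storage-provisioner.yaml")
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")

	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr

	if err := cmd.Run(); err != nil {
		// Same shape as the "apply failed, will retry" entries above.
		fmt.Printf("apply failed: %v\nstdout:\n%s\nstderr:\n%s\n", err, stdout.String(), stderr.String())
		return
	}
	fmt.Println("applied successfully")
}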
	[... identical polling of https://192.168.49.2:8441/api/v1/nodes/functional-889240 continued every ~0.5s from 18:09:30.185 to 18:09:58.685, all connection refused; node_ready "will retry" warnings logged at 18:09:30.185, 18:09:32.685, 18:09:34.685, 18:09:37.185, 18:09:39.685, 18:09:42.185, 18:09:44.185, 18:09:46.685, 18:09:48.686, 18:09:51.186, 18:09:53.685 and 18:09:56.185 ...]
	W1003 18:09:58.685755   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:59.185446   31648 type.go:168] "Request Body" body=""
	I1003 18:09:59.185545   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:59.185914   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:59.685737   31648 type.go:168] "Request Body" body=""
	I1003 18:09:59.685801   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:59.686146   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:00.185972   31648 type.go:168] "Request Body" body=""
	I1003 18:10:00.186075   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:00.186364   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:00.685077   31648 type.go:168] "Request Body" body=""
	I1003 18:10:00.685166   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:00.685464   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:01.185382   31648 type.go:168] "Request Body" body=""
	I1003 18:10:01.185446   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:01.185778   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:01.185830   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:01.685606   31648 type.go:168] "Request Body" body=""
	I1003 18:10:01.685677   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:01.686032   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:02.185907   31648 type.go:168] "Request Body" body=""
	I1003 18:10:02.186020   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:02.186378   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:02.685091   31648 type.go:168] "Request Body" body=""
	I1003 18:10:02.685152   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:02.685445   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:03.185142   31648 type.go:168] "Request Body" body=""
	I1003 18:10:03.185225   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:03.185561   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:03.685236   31648 type.go:168] "Request Body" body=""
	I1003 18:10:03.685339   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:03.685634   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:03.685696   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:04.185365   31648 type.go:168] "Request Body" body=""
	I1003 18:10:04.185433   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:04.185727   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:04.685562   31648 type.go:168] "Request Body" body=""
	I1003 18:10:04.685630   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:04.686027   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:05.185808   31648 type.go:168] "Request Body" body=""
	I1003 18:10:05.185875   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:05.186210   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:05.686012   31648 type.go:168] "Request Body" body=""
	I1003 18:10:05.686094   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:05.686420   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:05.686513   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:06.185220   31648 type.go:168] "Request Body" body=""
	I1003 18:10:06.185317   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:06.185670   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:06.685370   31648 type.go:168] "Request Body" body=""
	I1003 18:10:06.685434   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:06.685727   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:07.185434   31648 type.go:168] "Request Body" body=""
	I1003 18:10:07.185512   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:07.185878   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:07.685679   31648 type.go:168] "Request Body" body=""
	I1003 18:10:07.685748   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:07.686309   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:08.185067   31648 type.go:168] "Request Body" body=""
	I1003 18:10:08.185137   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:08.185459   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:08.185516   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:08.685191   31648 type.go:168] "Request Body" body=""
	I1003 18:10:08.685261   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:08.685582   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:09.185329   31648 type.go:168] "Request Body" body=""
	I1003 18:10:09.185397   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:09.185705   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:09.685441   31648 type.go:168] "Request Body" body=""
	I1003 18:10:09.685504   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:09.685840   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:10.185620   31648 type.go:168] "Request Body" body=""
	I1003 18:10:10.185689   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:10.186037   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:10.186087   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:10.685838   31648 type.go:168] "Request Body" body=""
	I1003 18:10:10.685914   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:10.686280   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:11.184954   31648 type.go:168] "Request Body" body=""
	I1003 18:10:11.185044   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:11.185353   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:11.685099   31648 type.go:168] "Request Body" body=""
	I1003 18:10:11.685168   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:11.685473   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:12.185192   31648 type.go:168] "Request Body" body=""
	I1003 18:10:12.185259   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:12.185564   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:12.685315   31648 type.go:168] "Request Body" body=""
	I1003 18:10:12.685386   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:12.685819   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:12.685875   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:12.884184   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:10:12.932382   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:10:12.934859   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:10:12.935018   31648 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1003 18:10:13.185242   31648 type.go:168] "Request Body" body=""
	I1003 18:10:13.185310   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:13.185617   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:13.685328   31648 type.go:168] "Request Body" body=""
	I1003 18:10:13.685430   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:13.685917   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:14.185730   31648 type.go:168] "Request Body" body=""
	I1003 18:10:14.185796   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:14.186122   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:14.456560   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:10:14.507486   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:10:14.509939   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:10:14.510064   31648 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1003 18:10:14.512677   31648 out.go:179] * Enabled addons: 
	I1003 18:10:14.514281   31648 addons.go:514] duration metric: took 1m59.941954445s for enable addons: enabled=[]
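
Both addon applies above fail for the same underlying reason as the node poll: kubectl's client-side validation tries to fetch /openapi/v2 from the apiserver, which is still refusing connections. kubectl's own error text suggests --validate=false as an escape hatch; minikube instead retries the apply, as the "apply failed, will retry" warnings show. A minimal sketch of that retry under a fixed backoff (applyWithRetry is a hypothetical helper; the kubectl binary and manifest paths are copied from the log, the attempt count and sleep are assumptions):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply` until it succeeds, matching the
// "apply failed, will retry" behavior in the log above.
func applyWithRetry(manifest string, attempts int) error {
	kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("%v: %s", err, out)
		time.Sleep(2 * time.Second) // give the apiserver time to come back
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 10); err != nil {
		fmt.Println("giving up:", err)
	}
}

In this run the apiserver never recovers within the enable-addons window, so minikube gives up and reports "Enabled addons:" with an empty list, as logged above.
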
	I1003 18:10:14.685449   31648 type.go:168] "Request Body" body=""
	I1003 18:10:14.685516   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:14.685857   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:14.685919   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:15.185675   31648 type.go:168] "Request Body" body=""
	I1003 18:10:15.185738   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:15.186060   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:15.685871   31648 type.go:168] "Request Body" body=""
	I1003 18:10:15.685938   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:15.686263   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:16.184928   31648 type.go:168] "Request Body" body=""
	I1003 18:10:16.185033   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:16.185365   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:16.685082   31648 type.go:168] "Request Body" body=""
	I1003 18:10:16.685144   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:16.685447   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:17.185125   31648 type.go:168] "Request Body" body=""
	I1003 18:10:17.185202   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:17.185514   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:17.185563   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:17.685251   31648 type.go:168] "Request Body" body=""
	I1003 18:10:17.685320   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:17.685625   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:18.185367   31648 type.go:168] "Request Body" body=""
	I1003 18:10:18.185448   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:18.185805   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:18.685631   31648 type.go:168] "Request Body" body=""
	I1003 18:10:18.685706   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:18.686092   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:19.185904   31648 type.go:168] "Request Body" body=""
	I1003 18:10:19.185995   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:19.186318   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:19.186371   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:19.685092   31648 type.go:168] "Request Body" body=""
	I1003 18:10:19.685164   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:19.685487   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:20.185213   31648 type.go:168] "Request Body" body=""
	I1003 18:10:20.185296   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:20.185633   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:20.685398   31648 type.go:168] "Request Body" body=""
	I1003 18:10:20.685475   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:20.685780   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:21.185636   31648 type.go:168] "Request Body" body=""
	I1003 18:10:21.185711   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:21.186047   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:21.685810   31648 type.go:168] "Request Body" body=""
	I1003 18:10:21.685874   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:21.686211   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:21.686273   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:22.184932   31648 type.go:168] "Request Body" body=""
	I1003 18:10:22.185016   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:22.185357   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:22.685073   31648 type.go:168] "Request Body" body=""
	I1003 18:10:22.685138   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:22.685450   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:23.185168   31648 type.go:168] "Request Body" body=""
	I1003 18:10:23.185239   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:23.185562   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:23.685280   31648 type.go:168] "Request Body" body=""
	I1003 18:10:23.685364   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:23.685684   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:24.185432   31648 type.go:168] "Request Body" body=""
	I1003 18:10:24.185494   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:24.185826   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:24.185890   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:24.685663   31648 type.go:168] "Request Body" body=""
	I1003 18:10:24.685735   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:24.686142   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:25.185900   31648 type.go:168] "Request Body" body=""
	I1003 18:10:25.185964   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:25.186274   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:25.685013   31648 type.go:168] "Request Body" body=""
	I1003 18:10:25.685093   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:25.685422   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:26.185213   31648 type.go:168] "Request Body" body=""
	I1003 18:10:26.185323   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:26.185654   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:26.685413   31648 type.go:168] "Request Body" body=""
	I1003 18:10:26.685482   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:26.685843   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:26.685908   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:27.185654   31648 type.go:168] "Request Body" body=""
	I1003 18:10:27.185733   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:27.186080   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:27.685901   31648 type.go:168] "Request Body" body=""
	I1003 18:10:27.685968   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:27.686301   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:28.185042   31648 type.go:168] "Request Body" body=""
	I1003 18:10:28.185109   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:28.185417   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:28.685129   31648 type.go:168] "Request Body" body=""
	I1003 18:10:28.685212   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:28.685544   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:29.185277   31648 type.go:168] "Request Body" body=""
	I1003 18:10:29.185350   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:29.185667   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:29.185717   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:29.685390   31648 type.go:168] "Request Body" body=""
	I1003 18:10:29.685463   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:29.685809   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:30.185653   31648 type.go:168] "Request Body" body=""
	I1003 18:10:30.185740   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:30.186077   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:30.685885   31648 type.go:168] "Request Body" body=""
	I1003 18:10:30.685950   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:30.686302   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:31.184960   31648 type.go:168] "Request Body" body=""
	I1003 18:10:31.185039   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:31.185351   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:31.685088   31648 type.go:168] "Request Body" body=""
	I1003 18:10:31.685183   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:31.685491   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:31.685553   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:32.185245   31648 type.go:168] "Request Body" body=""
	I1003 18:10:32.185311   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:32.185616   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:32.685334   31648 type.go:168] "Request Body" body=""
	I1003 18:10:32.685427   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:32.685753   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:33.185521   31648 type.go:168] "Request Body" body=""
	I1003 18:10:33.185585   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:33.185951   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:33.685776   31648 type.go:168] "Request Body" body=""
	I1003 18:10:33.685843   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:33.686164   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:33.686226   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:34.186008   31648 type.go:168] "Request Body" body=""
	I1003 18:10:34.186076   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:34.186390   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:34.685080   31648 type.go:168] "Request Body" body=""
	I1003 18:10:34.685151   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:34.685468   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:35.185199   31648 type.go:168] "Request Body" body=""
	I1003 18:10:35.185274   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:35.185624   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:35.685334   31648 type.go:168] "Request Body" body=""
	I1003 18:10:35.685407   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:35.685728   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:36.185543   31648 type.go:168] "Request Body" body=""
	I1003 18:10:36.185617   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:36.185950   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:36.186025   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:36.685767   31648 type.go:168] "Request Body" body=""
	I1003 18:10:36.685830   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:36.686160   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:37.185965   31648 type.go:168] "Request Body" body=""
	I1003 18:10:37.186062   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:37.186419   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:37.685168   31648 type.go:168] "Request Body" body=""
	I1003 18:10:37.685233   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:37.685563   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:38.185271   31648 type.go:168] "Request Body" body=""
	I1003 18:10:38.185345   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:38.185657   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:38.685369   31648 type.go:168] "Request Body" body=""
	I1003 18:10:38.685433   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:38.685746   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:38.685800   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:39.185560   31648 type.go:168] "Request Body" body=""
	I1003 18:10:39.185640   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:39.185997   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:39.685784   31648 type.go:168] "Request Body" body=""
	I1003 18:10:39.685851   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:39.686184   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:40.185949   31648 type.go:168] "Request Body" body=""
	I1003 18:10:40.186046   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:40.186401   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:40.685071   31648 type.go:168] "Request Body" body=""
	I1003 18:10:40.685152   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:40.685459   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:41.185267   31648 type.go:168] "Request Body" body=""
	I1003 18:10:41.185334   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:41.185637   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:41.185700   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:41.685380   31648 type.go:168] "Request Body" body=""
	I1003 18:10:41.685445   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:41.685830   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:42.185632   31648 type.go:168] "Request Body" body=""
	I1003 18:10:42.185724   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:42.186063   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:42.685859   31648 type.go:168] "Request Body" body=""
	I1003 18:10:42.685933   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:42.686273   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:43.185018   31648 type.go:168] "Request Body" body=""
	I1003 18:10:43.185089   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:43.185411   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:43.685086   31648 type.go:168] "Request Body" body=""
	I1003 18:10:43.685152   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:43.685478   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:43.685542   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:44.185259   31648 type.go:168] "Request Body" body=""
	I1003 18:10:44.185327   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:44.185679   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:44.685473   31648 type.go:168] "Request Body" body=""
	I1003 18:10:44.685537   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:44.685872   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:45.185684   31648 type.go:168] "Request Body" body=""
	I1003 18:10:45.185759   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:45.186086   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:45.685880   31648 type.go:168] "Request Body" body=""
	I1003 18:10:45.685945   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:45.686284   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:45.686349   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical request/response pair repeats every ~500 ms from 18:10:46 through 18:11:46, every attempt failing the same way ("dial tcp 192.168.49.2:8441: connect: connection refused"); node_ready.go emits the same "(will retry)" warning roughly every 2.5 s throughout ...]
	I1003 18:11:47.185171   31648 type.go:168] "Request Body" body=""
	I1003 18:11:47.185237   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:47.185530   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:47.685229   31648 type.go:168] "Request Body" body=""
	I1003 18:11:47.685292   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:47.685573   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:47.685625   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:48.185308   31648 type.go:168] "Request Body" body=""
	I1003 18:11:48.185378   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:48.185726   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:48.685435   31648 type.go:168] "Request Body" body=""
	I1003 18:11:48.685502   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:48.685818   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:49.185572   31648 type.go:168] "Request Body" body=""
	I1003 18:11:49.185639   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:49.185951   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:49.685755   31648 type.go:168] "Request Body" body=""
	I1003 18:11:49.685820   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:49.686165   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:49.686226   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:50.185972   31648 type.go:168] "Request Body" body=""
	I1003 18:11:50.186049   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:50.186347   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:50.685077   31648 type.go:168] "Request Body" body=""
	I1003 18:11:50.685149   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:50.685487   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:51.185355   31648 type.go:168] "Request Body" body=""
	I1003 18:11:51.185423   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:51.185749   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:51.685438   31648 type.go:168] "Request Body" body=""
	I1003 18:11:51.685502   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:51.685808   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:52.185581   31648 type.go:168] "Request Body" body=""
	I1003 18:11:52.185644   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:52.185967   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:52.186043   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:52.685763   31648 type.go:168] "Request Body" body=""
	I1003 18:11:52.685866   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:52.686218   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:53.184953   31648 type.go:168] "Request Body" body=""
	I1003 18:11:53.185051   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:53.185365   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:53.685069   31648 type.go:168] "Request Body" body=""
	I1003 18:11:53.685143   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:53.685457   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:54.185161   31648 type.go:168] "Request Body" body=""
	I1003 18:11:54.185226   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:54.185562   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:54.685310   31648 type.go:168] "Request Body" body=""
	I1003 18:11:54.685387   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:54.685726   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:54.685776   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:55.185417   31648 type.go:168] "Request Body" body=""
	I1003 18:11:55.185483   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:55.185815   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:55.685573   31648 type.go:168] "Request Body" body=""
	I1003 18:11:55.685677   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:55.686027   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:56.185731   31648 type.go:168] "Request Body" body=""
	I1003 18:11:56.185792   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:56.186116   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:56.685906   31648 type.go:168] "Request Body" body=""
	I1003 18:11:56.686004   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:56.686321   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:56.686379   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:57.185067   31648 type.go:168] "Request Body" body=""
	I1003 18:11:57.185134   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:57.185426   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:57.685144   31648 type.go:168] "Request Body" body=""
	I1003 18:11:57.685226   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:57.685539   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:58.185226   31648 type.go:168] "Request Body" body=""
	I1003 18:11:58.185291   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:58.185597   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:58.685288   31648 type.go:168] "Request Body" body=""
	I1003 18:11:58.685373   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:58.685689   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:59.185369   31648 type.go:168] "Request Body" body=""
	I1003 18:11:59.185441   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:59.185768   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:59.185831   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:59.685575   31648 type.go:168] "Request Body" body=""
	I1003 18:11:59.685674   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:59.686024   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:00.185851   31648 type.go:168] "Request Body" body=""
	I1003 18:12:00.185922   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:00.186234   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:00.684953   31648 type.go:168] "Request Body" body=""
	I1003 18:12:00.685062   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:00.685403   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:01.185179   31648 type.go:168] "Request Body" body=""
	I1003 18:12:01.185248   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:01.185572   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:01.685293   31648 type.go:168] "Request Body" body=""
	I1003 18:12:01.685376   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:01.685710   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:01.685766   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:02.185411   31648 type.go:168] "Request Body" body=""
	I1003 18:12:02.185478   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:02.185826   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:02.685596   31648 type.go:168] "Request Body" body=""
	I1003 18:12:02.685688   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:02.686031   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:03.185821   31648 type.go:168] "Request Body" body=""
	I1003 18:12:03.185887   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:03.186235   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:03.684941   31648 type.go:168] "Request Body" body=""
	I1003 18:12:03.685043   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:03.685366   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:04.185065   31648 type.go:168] "Request Body" body=""
	I1003 18:12:04.185133   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:04.185448   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:04.185500   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:04.685256   31648 type.go:168] "Request Body" body=""
	I1003 18:12:04.685332   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:04.685650   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:05.185329   31648 type.go:168] "Request Body" body=""
	I1003 18:12:05.185398   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:05.185718   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:05.685410   31648 type.go:168] "Request Body" body=""
	I1003 18:12:05.685475   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:05.685794   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:06.185563   31648 type.go:168] "Request Body" body=""
	I1003 18:12:06.185632   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:06.185948   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:06.186035   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:06.685752   31648 type.go:168] "Request Body" body=""
	I1003 18:12:06.685824   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:06.686177   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:07.185942   31648 type.go:168] "Request Body" body=""
	I1003 18:12:07.186020   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:07.186318   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:07.685031   31648 type.go:168] "Request Body" body=""
	I1003 18:12:07.685100   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:07.685424   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:08.185310   31648 type.go:168] "Request Body" body=""
	I1003 18:12:08.185557   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:08.186174   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:08.186246   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:08.685021   31648 type.go:168] "Request Body" body=""
	I1003 18:12:08.685163   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:08.685624   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:09.185153   31648 type.go:168] "Request Body" body=""
	I1003 18:12:09.185228   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:09.185529   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:09.685080   31648 type.go:168] "Request Body" body=""
	I1003 18:12:09.685150   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:09.685445   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:10.185696   31648 type.go:168] "Request Body" body=""
	I1003 18:12:10.185761   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:10.186171   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:10.685822   31648 type.go:168] "Request Body" body=""
	I1003 18:12:10.685891   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:10.686201   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:10.686266   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
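Each of these warnings is a plain TCP-level failure: nothing is accepting connections on 192.168.49.2:8441 at that moment, so the dial fails before any HTTP exchange happens (hence the empty status and headers in the Response lines). A quick standalone probe of the same endpoint (purely illustrative; the address is taken from the log and the 2-second timeout is an assumption):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same endpoint the wait loop above keeps dialing.
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			// "connect: connection refused" means the port is closed (no
			// listener yet); a timeout would instead suggest a firewall
			// or a hung apiserver.
			fmt.Println("dial failed:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}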
	I1003 18:12:11.184920   31648 type.go:168] "Request Body" body=""
	I1003 18:12:11.185025   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:11.185378   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:11.684920   31648 type.go:168] "Request Body" body=""
	I1003 18:12:11.685033   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:11.685353   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:12.186032   31648 type.go:168] "Request Body" body=""
	I1003 18:12:12.186096   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:12.186405   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:12.685015   31648 type.go:168] "Request Body" body=""
	I1003 18:12:12.685091   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:12.685409   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:13.185019   31648 type.go:168] "Request Body" body=""
	I1003 18:12:13.185093   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:13.185404   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:13.185456   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:13.685017   31648 type.go:168] "Request Body" body=""
	I1003 18:12:13.685098   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:13.685420   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:14.185003   31648 type.go:168] "Request Body" body=""
	I1003 18:12:14.185073   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:14.185375   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:14.685353   31648 type.go:168] "Request Body" body=""
	I1003 18:12:14.685425   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:14.685732   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:15.185329   31648 type.go:168] "Request Body" body=""
	I1003 18:12:15.185393   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:15.185699   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:15.185756   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:15.685287   31648 type.go:168] "Request Body" body=""
	I1003 18:12:15.685366   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:15.685696   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:16.185545   31648 type.go:168] "Request Body" body=""
	I1003 18:12:16.185614   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:16.185938   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:16.685555   31648 type.go:168] "Request Body" body=""
	I1003 18:12:16.685672   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:16.686031   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:17.185708   31648 type.go:168] "Request Body" body=""
	I1003 18:12:17.185775   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:17.186072   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:17.186122   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:17.685745   31648 type.go:168] "Request Body" body=""
	I1003 18:12:17.685826   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:17.686169   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:18.185895   31648 type.go:168] "Request Body" body=""
	I1003 18:12:18.185966   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:18.186347   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:18.685985   31648 type.go:168] "Request Body" body=""
	I1003 18:12:18.686065   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:18.686377   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:19.185028   31648 type.go:168] "Request Body" body=""
	I1003 18:12:19.185094   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:19.185404   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:19.684993   31648 type.go:168] "Request Body" body=""
	I1003 18:12:19.685067   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:19.685365   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:19.685419   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:20.184966   31648 type.go:168] "Request Body" body=""
	I1003 18:12:20.185059   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:20.185369   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:20.684968   31648 type.go:168] "Request Body" body=""
	I1003 18:12:20.685064   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:20.685377   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:21.185199   31648 type.go:168] "Request Body" body=""
	I1003 18:12:21.185268   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:21.185584   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:21.685182   31648 type.go:168] "Request Body" body=""
	I1003 18:12:21.685270   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:21.685589   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:21.685651   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:22.185158   31648 type.go:168] "Request Body" body=""
	I1003 18:12:22.185226   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:22.185552   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:22.685092   31648 type.go:168] "Request Body" body=""
	I1003 18:12:22.685168   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:22.685483   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:23.185069   31648 type.go:168] "Request Body" body=""
	I1003 18:12:23.185132   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:23.185442   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:23.685074   31648 type.go:168] "Request Body" body=""
	I1003 18:12:23.685147   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:23.685472   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:24.185079   31648 type.go:168] "Request Body" body=""
	I1003 18:12:24.185152   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:24.185468   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:24.185523   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:24.685267   31648 type.go:168] "Request Body" body=""
	I1003 18:12:24.685328   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:24.685633   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:25.185201   31648 type.go:168] "Request Body" body=""
	I1003 18:12:25.185267   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:25.185577   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:25.685147   31648 type.go:168] "Request Body" body=""
	I1003 18:12:25.685221   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:25.685537   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:26.185376   31648 type.go:168] "Request Body" body=""
	I1003 18:12:26.185445   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:26.185763   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:26.185815   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:26.685320   31648 type.go:168] "Request Body" body=""
	I1003 18:12:26.685398   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:26.685732   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:27.185386   31648 type.go:168] "Request Body" body=""
	I1003 18:12:27.185456   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:27.185774   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:27.685332   31648 type.go:168] "Request Body" body=""
	I1003 18:12:27.685409   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:27.685755   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:28.185323   31648 type.go:168] "Request Body" body=""
	I1003 18:12:28.185387   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:28.185709   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:28.685266   31648 type.go:168] "Request Body" body=""
	I1003 18:12:28.685343   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:28.685731   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:28.685797   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:29.185293   31648 type.go:168] "Request Body" body=""
	I1003 18:12:29.185362   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:29.185681   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:29.685253   31648 type.go:168] "Request Body" body=""
	I1003 18:12:29.685341   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:29.685670   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:30.185273   31648 type.go:168] "Request Body" body=""
	I1003 18:12:30.185336   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:30.185638   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:30.685205   31648 type.go:168] "Request Body" body=""
	I1003 18:12:30.685285   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:30.685586   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:31.185396   31648 type.go:168] "Request Body" body=""
	I1003 18:12:31.185471   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:31.185833   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:31.185890   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
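The Accept header repeated in every request above (protobuf preferred, JSON as fallback) is set on the client side; in client-go it corresponds to the AcceptContentTypes and ContentType fields of rest.Config. A hedged sketch of how a client ends up sending exactly that header (the kubeconfig path is an assumption):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// Ask for protobuf first, falling back to JSON — this yields the
		// Accept header seen in the log entries above.
		cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
		cfg.ContentType = "application/vnd.kubernetes.protobuf"
		fmt.Println("configured Accept:", cfg.AcceptContentTypes)
	}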
	I1003 18:12:31.685435   31648 type.go:168] "Request Body" body=""
	I1003 18:12:31.685517   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:31.685844   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:32.185392   31648 type.go:168] "Request Body" body=""
	I1003 18:12:32.185458   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:32.185764   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:32.685377   31648 type.go:168] "Request Body" body=""
	I1003 18:12:32.685464   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:32.685795   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:33.185359   31648 type.go:168] "Request Body" body=""
	I1003 18:12:33.185426   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:33.185740   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:33.685326   31648 type.go:168] "Request Body" body=""
	I1003 18:12:33.685407   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:33.685749   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:33.685805   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:34.185324   31648 type.go:168] "Request Body" body=""
	I1003 18:12:34.185391   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:34.185798   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:34.685697   31648 type.go:168] "Request Body" body=""
	I1003 18:12:34.685778   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:34.686147   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:35.185833   31648 type.go:168] "Request Body" body=""
	I1003 18:12:35.185908   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:35.186230   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:35.685876   31648 type.go:168] "Request Body" body=""
	I1003 18:12:35.685957   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:35.686342   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:35.686404   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:36.185025   31648 type.go:168] "Request Body" body=""
	I1003 18:12:36.185106   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:36.185455   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[… roughly a minute of near-identical 500ms polls elided: GET https://192.168.49.2:8441/api/v1/nodes/functional-889240 from 18:12:36.685 through 18:13:37.185, each request body empty, each response empty, and each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 repeated the "will retry" warning roughly every 2 seconds, ending with the warning and final poll below …]
	W1003 18:13:36.685664   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:37.685170   31648 type.go:168] "Request Body" body=""
	I1003 18:13:37.685238   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:37.685540   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:38.185094   31648 type.go:168] "Request Body" body=""
	I1003 18:13:38.185165   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:38.185480   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:38.685085   31648 type.go:168] "Request Body" body=""
	I1003 18:13:38.685154   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:38.685491   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:39.185087   31648 type.go:168] "Request Body" body=""
	I1003 18:13:39.185161   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:39.185473   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:39.185530   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:39.685041   31648 type.go:168] "Request Body" body=""
	I1003 18:13:39.685104   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:39.685443   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:40.184993   31648 type.go:168] "Request Body" body=""
	I1003 18:13:40.185060   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:40.185369   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:40.684957   31648 type.go:168] "Request Body" body=""
	I1003 18:13:40.685046   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:40.685391   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:41.185256   31648 type.go:168] "Request Body" body=""
	I1003 18:13:41.185323   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:41.185632   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:41.185691   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:41.685166   31648 type.go:168] "Request Body" body=""
	I1003 18:13:41.685236   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:41.685524   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:42.185147   31648 type.go:168] "Request Body" body=""
	I1003 18:13:42.185215   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:42.185512   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:42.685072   31648 type.go:168] "Request Body" body=""
	I1003 18:13:42.685137   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:42.685438   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:43.185039   31648 type.go:168] "Request Body" body=""
	I1003 18:13:43.185104   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:43.185400   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:43.684960   31648 type.go:168] "Request Body" body=""
	I1003 18:13:43.685045   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:43.685352   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:43.685405   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:44.184941   31648 type.go:168] "Request Body" body=""
	I1003 18:13:44.185024   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:44.185317   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:44.685052   31648 type.go:168] "Request Body" body=""
	I1003 18:13:44.685120   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:44.685425   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:45.185055   31648 type.go:168] "Request Body" body=""
	I1003 18:13:45.185131   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:45.185445   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:45.685028   31648 type.go:168] "Request Body" body=""
	I1003 18:13:45.685092   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:45.685396   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:45.685450   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:46.185196   31648 type.go:168] "Request Body" body=""
	I1003 18:13:46.185259   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:46.185598   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:46.685146   31648 type.go:168] "Request Body" body=""
	I1003 18:13:46.685207   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:46.685520   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:47.185085   31648 type.go:168] "Request Body" body=""
	I1003 18:13:47.185146   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:47.185435   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:47.685023   31648 type.go:168] "Request Body" body=""
	I1003 18:13:47.685083   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:47.685387   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:48.184938   31648 type.go:168] "Request Body" body=""
	I1003 18:13:48.185024   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:48.185317   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:48.185366   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:48.685968   31648 type.go:168] "Request Body" body=""
	I1003 18:13:48.686071   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:48.686392   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:49.184927   31648 type.go:168] "Request Body" body=""
	I1003 18:13:49.185007   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:49.185301   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:49.685951   31648 type.go:168] "Request Body" body=""
	I1003 18:13:49.686058   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:49.686375   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:50.185987   31648 type.go:168] "Request Body" body=""
	I1003 18:13:50.186049   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:50.186339   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:50.186393   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:50.686008   31648 type.go:168] "Request Body" body=""
	I1003 18:13:50.686095   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:50.686413   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:51.185213   31648 type.go:168] "Request Body" body=""
	I1003 18:13:51.185281   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:51.185558   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:51.685097   31648 type.go:168] "Request Body" body=""
	I1003 18:13:51.685183   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:51.685518   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:52.185069   31648 type.go:168] "Request Body" body=""
	I1003 18:13:52.185132   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:52.185409   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:52.685038   31648 type.go:168] "Request Body" body=""
	I1003 18:13:52.685113   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:52.685416   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:52.685468   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:53.184948   31648 type.go:168] "Request Body" body=""
	I1003 18:13:53.185026   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:53.185309   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:53.685950   31648 type.go:168] "Request Body" body=""
	I1003 18:13:53.686043   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:53.686348   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:54.185948   31648 type.go:168] "Request Body" body=""
	I1003 18:13:54.186022   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:54.186302   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:54.685064   31648 type.go:168] "Request Body" body=""
	I1003 18:13:54.685138   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:54.685429   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:54.685486   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:55.185055   31648 type.go:168] "Request Body" body=""
	I1003 18:13:55.185122   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:55.185388   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:55.685066   31648 type.go:168] "Request Body" body=""
	I1003 18:13:55.685164   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:55.685462   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:56.185338   31648 type.go:168] "Request Body" body=""
	I1003 18:13:56.185406   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:56.185704   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:56.685239   31648 type.go:168] "Request Body" body=""
	I1003 18:13:56.685304   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:56.685629   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:56.685684   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:57.185240   31648 type.go:168] "Request Body" body=""
	I1003 18:13:57.185305   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:57.185635   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:57.685223   31648 type.go:168] "Request Body" body=""
	I1003 18:13:57.685287   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:57.685578   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:58.185123   31648 type.go:168] "Request Body" body=""
	I1003 18:13:58.185189   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:58.185504   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:58.685074   31648 type.go:168] "Request Body" body=""
	I1003 18:13:58.685137   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:58.685464   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:59.185038   31648 type.go:168] "Request Body" body=""
	I1003 18:13:59.185102   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:59.185391   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:59.185441   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:59.684997   31648 type.go:168] "Request Body" body=""
	I1003 18:13:59.685066   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:59.685383   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:00.184957   31648 type.go:168] "Request Body" body=""
	I1003 18:14:00.185041   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:00.185348   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:00.685990   31648 type.go:168] "Request Body" body=""
	I1003 18:14:00.686052   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:00.686352   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:01.185220   31648 type.go:168] "Request Body" body=""
	I1003 18:14:01.185292   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:01.185619   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:01.185673   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:01.685170   31648 type.go:168] "Request Body" body=""
	I1003 18:14:01.685244   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:01.685572   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:02.185133   31648 type.go:168] "Request Body" body=""
	I1003 18:14:02.185197   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:02.185506   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:02.685118   31648 type.go:168] "Request Body" body=""
	I1003 18:14:02.685184   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:02.685488   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:03.185090   31648 type.go:168] "Request Body" body=""
	I1003 18:14:03.185159   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:03.185488   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:03.685055   31648 type.go:168] "Request Body" body=""
	I1003 18:14:03.685119   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:03.685428   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:03.685480   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:04.185061   31648 type.go:168] "Request Body" body=""
	I1003 18:14:04.185131   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:04.185458   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:04.685298   31648 type.go:168] "Request Body" body=""
	I1003 18:14:04.685366   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:04.685670   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:05.185278   31648 type.go:168] "Request Body" body=""
	I1003 18:14:05.185348   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:05.185711   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:05.685243   31648 type.go:168] "Request Body" body=""
	I1003 18:14:05.685313   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:05.685621   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:05.685670   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:06.185390   31648 type.go:168] "Request Body" body=""
	I1003 18:14:06.185454   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:06.185796   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:06.685338   31648 type.go:168] "Request Body" body=""
	I1003 18:14:06.685404   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:06.685744   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:07.185312   31648 type.go:168] "Request Body" body=""
	I1003 18:14:07.185375   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:07.185694   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:07.685319   31648 type.go:168] "Request Body" body=""
	I1003 18:14:07.685388   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:07.685720   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:07.685775   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:08.185299   31648 type.go:168] "Request Body" body=""
	I1003 18:14:08.185362   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:08.185681   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:08.685362   31648 type.go:168] "Request Body" body=""
	I1003 18:14:08.685501   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:08.686040   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:09.185088   31648 type.go:168] "Request Body" body=""
	I1003 18:14:09.185166   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:09.185492   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:09.685168   31648 type.go:168] "Request Body" body=""
	I1003 18:14:09.685230   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:09.685527   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:10.185203   31648 type.go:168] "Request Body" body=""
	I1003 18:14:10.185266   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:10.185584   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:10.185635   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:10.685306   31648 type.go:168] "Request Body" body=""
	I1003 18:14:10.685367   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:10.685706   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:11.185477   31648 type.go:168] "Request Body" body=""
	I1003 18:14:11.185545   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:11.185858   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:11.685629   31648 type.go:168] "Request Body" body=""
	I1003 18:14:11.685690   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:11.686017   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:12.185788   31648 type.go:168] "Request Body" body=""
	I1003 18:14:12.185850   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:12.186194   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:12.186261   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:12.685007   31648 type.go:168] "Request Body" body=""
	I1003 18:14:12.685075   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:12.685367   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:13.185078   31648 type.go:168] "Request Body" body=""
	I1003 18:14:13.185142   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:13.185434   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:13.685146   31648 type.go:168] "Request Body" body=""
	I1003 18:14:13.685215   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:13.685514   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:14.185200   31648 type.go:168] "Request Body" body=""
	I1003 18:14:14.185264   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:14.185577   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:14.685359   31648 type.go:168] "Request Body" body=""
	W1003 18:14:14.685420   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded
	I1003 18:14:14.685433   31648 node_ready.go:38] duration metric: took 6m0.000605507s for node "functional-889240" to be "Ready" ...
	I1003 18:14:14.688030   31648 out.go:203] 
	W1003 18:14:14.689379   31648 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1003 18:14:14.689402   31648 out.go:285] * 
	W1003 18:14:14.691089   31648 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:14:14.693118   31648 out.go:203] 
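
Note on the loop above: minikube issues GET /api/v1/nodes/functional-889240 twice per second (requests land at roughly .185 and .685 of each second, i.e. a 500 ms interval), silently retrying every connection-refused error until the 6m0s node-ready deadline expires, at which point the client rate limiter surfaces "context deadline exceeded". A minimal client-go sketch of that wait pattern, assuming the standard k8s.io/client-go and k8s.io/apimachinery packages; the kubeconfig path and node name are illustrative, not minikube's actual implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node's Ready condition every 500ms until it is
	// True or the 6-minute deadline expires, retrying transient API errors --
	// the same shape as the round_trippers loop in the log above.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // e.g. connection refused: keep retrying
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(context.Background(), kubernetes.NewForConfigOrDie(cfg), "functional-889240"); err != nil {
			// With nothing listening on 8441 this is "context deadline exceeded".
			fmt.Println("node never became Ready:", err)
		}
	}
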
	
	
	==> CRI-O <==
	Oct 03 18:14:22 functional-889240 crio[2966]: time="2025-10-03T18:14:22.237702922Z" level=info msg="createCtr: deleting container ID 8f0a46b12b0e26714ac2e0e8a7775ef7fddcceb98c3fedd5e8fa2fc0cd9f33e4 from idIndex" id=57ef644e-6e3f-4ef2-a0e8-1504d4de41f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:22 functional-889240 crio[2966]: time="2025-10-03T18:14:22.237732484Z" level=info msg="createCtr: removing container 8f0a46b12b0e26714ac2e0e8a7775ef7fddcceb98c3fedd5e8fa2fc0cd9f33e4" id=57ef644e-6e3f-4ef2-a0e8-1504d4de41f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:22 functional-889240 crio[2966]: time="2025-10-03T18:14:22.237759668Z" level=info msg="createCtr: deleting container 8f0a46b12b0e26714ac2e0e8a7775ef7fddcceb98c3fedd5e8fa2fc0cd9f33e4 from storage" id=57ef644e-6e3f-4ef2-a0e8-1504d4de41f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:22 functional-889240 crio[2966]: time="2025-10-03T18:14:22.239825268Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-889240_kube-system_7e715cb6024854d45a9fa99576167e43_0" id=57ef644e-6e3f-4ef2-a0e8-1504d4de41f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:22 functional-889240 crio[2966]: time="2025-10-03T18:14:22.449242231Z" level=info msg="Checking image status: minikube-local-cache-test:functional-889240" id=e52f58d1-3fc6-4cc7-b825-5dbb07573c1c name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:22 functional-889240 crio[2966]: time="2025-10-03T18:14:22.471998502Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-889240" id=c09ab8d0-7cb3-454d-9d65-4383685bbfea name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:22 functional-889240 crio[2966]: time="2025-10-03T18:14:22.472148146Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-889240 not found" id=c09ab8d0-7cb3-454d-9d65-4383685bbfea name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:22 functional-889240 crio[2966]: time="2025-10-03T18:14:22.472198001Z" level=info msg="Neither image nor artifact docker.io/library/minikube-local-cache-test:functional-889240 found" id=c09ab8d0-7cb3-454d-9d65-4383685bbfea name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:22 functional-889240 crio[2966]: time="2025-10-03T18:14:22.494562801Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-889240" id=8a2a0114-8203-45ff-98e8-a749519d21d8 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:22 functional-889240 crio[2966]: time="2025-10-03T18:14:22.494735707Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-889240 not found" id=8a2a0114-8203-45ff-98e8-a749519d21d8 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:22 functional-889240 crio[2966]: time="2025-10-03T18:14:22.494788086Z" level=info msg="Neither image nor artifact localhost/library/minikube-local-cache-test:functional-889240 found" id=8a2a0114-8203-45ff-98e8-a749519d21d8 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:23 functional-889240 crio[2966]: time="2025-10-03T18:14:23.24477742Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=d4926c34-d9cd-40a9-8d85-0e2b6d94942f name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:23 functional-889240 crio[2966]: time="2025-10-03T18:14:23.536910262Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=23e33af5-a773-4b1a-9ba2-3601b67d5486 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:23 functional-889240 crio[2966]: time="2025-10-03T18:14:23.537046025Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=23e33af5-a773-4b1a-9ba2-3601b67d5486 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:23 functional-889240 crio[2966]: time="2025-10-03T18:14:23.537074554Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=23e33af5-a773-4b1a-9ba2-3601b67d5486 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:23 functional-889240 crio[2966]: time="2025-10-03T18:14:23.955913267Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=c519c75a-1d33-41c1-bd4e-7a62d7d1392c name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:23 functional-889240 crio[2966]: time="2025-10-03T18:14:23.956061121Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=c519c75a-1d33-41c1-bd4e-7a62d7d1392c name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:23 functional-889240 crio[2966]: time="2025-10-03T18:14:23.956092646Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=c519c75a-1d33-41c1-bd4e-7a62d7d1392c name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:23 functional-889240 crio[2966]: time="2025-10-03T18:14:23.978772267Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=e0c94d54-2133-40bc-8659-e30355633a00 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:23 functional-889240 crio[2966]: time="2025-10-03T18:14:23.978911816Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=e0c94d54-2133-40bc-8659-e30355633a00 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:23 functional-889240 crio[2966]: time="2025-10-03T18:14:23.978961101Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=e0c94d54-2133-40bc-8659-e30355633a00 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:24 functional-889240 crio[2966]: time="2025-10-03T18:14:24.013812991Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=4313db7f-572c-47d0-94d3-ee8c9b922da7 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:24 functional-889240 crio[2966]: time="2025-10-03T18:14:24.013949756Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=4313db7f-572c-47d0-94d3-ee8c9b922da7 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:24 functional-889240 crio[2966]: time="2025-10-03T18:14:24.014010006Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=4313db7f-572c-47d0-94d3-ee8c9b922da7 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:24 functional-889240 crio[2966]: time="2025-10-03T18:14:24.454255502Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=7c86583e-9529-491c-ad99-0b6b49fd0710 name=/runtime.v1.ImageService/ImageStatus
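
The repeated ImageStatus probes above show CRI-O's short-name resolution order for the locally built test image: the reference as given, then the docker.io/library/ form, then the localhost/library/ form. A tiny sketch of that candidate expansion (hypothetical helper for illustration, not CRI-O's own code):

	package main

	import "fmt"

	// candidates mirrors the lookup order visible in the CRI-O log: the ref
	// as given, then docker.io/library, then localhost/library.
	func candidates(short string) []string {
		return []string{
			short,
			"docker.io/library/" + short,
			"localhost/library/" + short,
		}
	}

	func main() {
		for _, ref := range candidates("minikube-local-cache-test:functional-889240") {
			fmt.Println("checking image status:", ref)
		}
	}
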
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:14:25.829768    5340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:14:25.830307    5340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:14:25.831814    5340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:14:25.832251    5340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:14:25.833727    5340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
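
Every failure in this section reduces to the same symptom: nothing is listening on port 8441, so dialing either the cluster address (192.168.49.2:8441) or localhost returns ECONNREFUSED immediately. A small Go probe that reproduces the distinction between a refused dial and other network errors; the endpoint is taken from the log, and the syscall constant makes this Linux-specific:

	package main

	import (
		"errors"
		"fmt"
		"net"
		"syscall"
		"time"
	)

	func main() {
		// Probe the apiserver endpoint the log keeps dialing.
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			if errors.Is(err, syscall.ECONNREFUSED) {
				fmt.Println("connection refused: nothing is listening on 8441")
			} else {
				fmt.Println("dial failed:", err) // e.g. timeout, no route
			}
			return
		}
		conn.Close()
		fmt.Println("8441 is accepting connections")
	}
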
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:14:25 up 56 min,  0 user,  load average: 0.12, 0.03, 0.04
	Linux functional-889240 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:14:20 functional-889240 kubelet[1817]:  > podSandboxID="f7a911a7beb9273e68dd3941cf8e91314c9e072bbe2986f9740324a1866bc050"
	Oct 03 18:14:20 functional-889240 kubelet[1817]: E1003 18:14:20.237057    1817 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:14:20 functional-889240 kubelet[1817]:         container etcd start failed in pod etcd-functional-889240_kube-system(a73daf0147d5280c6db538ca59db9fe0): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:14:20 functional-889240 kubelet[1817]:  > logger="UnhandledError"
	Oct 03 18:14:20 functional-889240 kubelet[1817]: E1003 18:14:20.237086    1817 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-889240" podUID="a73daf0147d5280c6db538ca59db9fe0"
	Oct 03 18:14:21 functional-889240 kubelet[1817]: E1003 18:14:21.212113    1817 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:14:21 functional-889240 kubelet[1817]: E1003 18:14:21.237836    1817 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:14:21 functional-889240 kubelet[1817]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:14:21 functional-889240 kubelet[1817]:  > podSandboxID="bb5ee21569299932af0968d7ca6c3e44bd5f6c5d7c8e5900d54800ccc90ccf96"
	Oct 03 18:14:21 functional-889240 kubelet[1817]: E1003 18:14:21.237934    1817 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:14:21 functional-889240 kubelet[1817]:         container kube-apiserver start failed in pod kube-apiserver-functional-889240_kube-system(c6bcf20a60b81dff297fc63f5b978297): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:14:21 functional-889240 kubelet[1817]:  > logger="UnhandledError"
	Oct 03 18:14:21 functional-889240 kubelet[1817]: E1003 18:14:21.237961    1817 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-889240" podUID="c6bcf20a60b81dff297fc63f5b978297"
	Oct 03 18:14:21 functional-889240 kubelet[1817]: E1003 18:14:21.498941    1817 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-889240.186b0d404ae58a04\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-889240.186b0d404ae58a04  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-889240,UID:functional-889240,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-889240 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-889240,},FirstTimestamp:2025-10-03 18:04:09.203935748 +0000 UTC m=+0.376858749,LastTimestamp:2025-10-03 18:04:09.206706066 +0000 UTC m=+0.379629064,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-889240,}"
	Oct 03 18:14:22 functional-889240 kubelet[1817]: E1003 18:14:22.212210    1817 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:14:22 functional-889240 kubelet[1817]: E1003 18:14:22.240119    1817 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:14:22 functional-889240 kubelet[1817]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:14:22 functional-889240 kubelet[1817]:  > podSandboxID="65835069a3bb03e380bb50149082d0338f4c2642bf6aea8dacf1e0715b6f21c8"
	Oct 03 18:14:22 functional-889240 kubelet[1817]: E1003 18:14:22.240225    1817 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:14:22 functional-889240 kubelet[1817]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-889240_kube-system(7e715cb6024854d45a9fa99576167e43): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:14:22 functional-889240 kubelet[1817]:  > logger="UnhandledError"
	Oct 03 18:14:22 functional-889240 kubelet[1817]: E1003 18:14:22.240257    1817 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-889240" podUID="7e715cb6024854d45a9fa99576167e43"
	Oct 03 18:14:23 functional-889240 kubelet[1817]: E1003 18:14:23.891877    1817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-889240?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 03 18:14:24 functional-889240 kubelet[1817]: I1003 18:14:24.090864    1817 kubelet_node_status.go:75] "Attempting to register node" node="functional-889240"
	Oct 03 18:14:24 functional-889240 kubelet[1817]: E1003 18:14:24.091265    1817 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-889240"
	

-- /stdout --
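The dominant failure in the kubelet log above is "container create failed: cannot open sd-bus: No such file or directory". CRI-O in this run uses the systemd cgroup manager (the start log further down rewrites cgroup_manager = "systemd" into /etc/crio/crio.conf.d/02-crio.conf), so every container create needs a reachable systemd D-Bus endpoint inside the kicbase node. A minimal diagnostic sketch in Go, shelling out to docker exec; the two socket paths are conventional systemd/D-Bus defaults and are an assumption, not something this report verifies:

	// sdbus_probe.go - hedged diagnostic sketch, not part of the test suite:
	// checks whether the minikube node container exposes the systemd/D-Bus
	// sockets that CRI-O's systemd cgroup manager needs. Socket paths are
	// assumed conventional defaults.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		node := "functional-889240" // node container name from this report
		for _, p := range []string{
			"/run/systemd/private",        // systemd manager socket (direct sd-bus connection)
			"/run/dbus/system_bus_socket", // system D-Bus socket
		} {
			// `docker exec <node> test -S <path>` exits 0 only if <path> is a socket.
			err := exec.Command("docker", "exec", node, "test", "-S", p).Run()
			fmt.Printf("%-30s present=%v\n", p, err == nil)
		}
	}

While those sockets are absent, the static pods (etcd, kube-apiserver, kube-controller-manager) would be expected to keep looping through the CreateContainerError seen above.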
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240: exit status 2 (316.014287ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-889240" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (2.06s)

x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (2.03s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-889240 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-889240 get pods: exit status 1 (110.655112ms)

** stderr ** 
	E1003 18:14:26.742048   37600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:14:26.742477   37600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:14:26.743948   37600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:14:26.744329   37600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:14:26.745737   37600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-889240 get pods": exit status 1
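Every client call in the stderr above fails with "connect: connection refused" against 192.168.49.2:8441, meaning nothing is listening on the apiserver port at all, which is consistent with the CreateContainerError loop that keeps kube-apiserver from ever starting. A minimal sketch of the same reachability probe, with the address taken from this report:

	// apiserver_probe.go - hedged sketch reproducing the symptom above:
	// dial the apiserver endpoint and report whether anything is listening.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.49.2:8441" // apiserver host:port from this report
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err) // expected here: connect: connection refused
			return
		}
		defer conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}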
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-889240
helpers_test.go:243: (dbg) docker inspect functional-889240:

-- stdout --
	[
	    {
	        "Id": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	        "Created": "2025-10-03T17:59:56.619817507Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 26766,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T17:59:56.652603806Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hosts",
	        "LogPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a-json.log",
	        "Name": "/functional-889240",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-889240:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-889240",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	                "LowerDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-889240",
	                "Source": "/var/lib/docker/volumes/functional-889240/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-889240",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-889240",
	                "name.minikube.sigs.k8s.io": "functional-889240",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da15d31dc23bdd4694ae9e3b61015d7ce0d61668c73d3e386422834c6f0321d8",
	            "SandboxKey": "/var/run/docker/netns/da15d31dc23b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-889240": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:9e:1d:e9:d9:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "03281bed183d0817c0bc237b5c25093fc10222138aedde4c7deef5823759fa24",
	                    "EndpointID": "28fa584fdd6e253816ae08a2460ef02b91085c8a7996d55008876e3bd65bbc7e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-889240",
	                        "9f4f0f10b4a9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
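One detail worth reading out of the inspect output: HostConfig.PortBindings requests HostPort "" (Docker picks a free 127.0.0.1 port at start), while NetworkSettings.Ports records what was actually assigned, e.g. 22/tcp -> 127.0.0.1:32778. The start log below retrieves that mapping with a docker container inspect -f Go template; a minimal standalone sketch of the same lookup (the sshHostPort helper name is illustrative, not minikube's):

	// hostport.go - hedged sketch: resolve the host port Docker assigned to
	// the node's SSH port, using the same inspect template that appears in
	// the minikube start log below.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort returns the host port mapped to 22/tcp for a container.
	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("functional-889240")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh published on 127.0.0.1:" + port) // 32778 in this report
	}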
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240: exit status 2 (289.133416ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 logs -n 25
helpers_test.go:260: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ nospam-093146 --log_dir /tmp/nospam-093146 pause                                                              │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ unpause │ nospam-093146 --log_dir /tmp/nospam-093146 unpause                                                            │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ unpause │ nospam-093146 --log_dir /tmp/nospam-093146 unpause                                                            │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ unpause │ nospam-093146 --log_dir /tmp/nospam-093146 unpause                                                            │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ stop    │ nospam-093146 --log_dir /tmp/nospam-093146 stop                                                               │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ stop    │ nospam-093146 --log_dir /tmp/nospam-093146 stop                                                               │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ stop    │ nospam-093146 --log_dir /tmp/nospam-093146 stop                                                               │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ delete  │ -p nospam-093146                                                                                              │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ start   │ -p functional-889240 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │                     │
	│ start   │ -p functional-889240 --alsologtostderr -v=8                                                                   │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:08 UTC │                     │
	│ cache   │ functional-889240 cache add registry.k8s.io/pause:3.1                                                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ functional-889240 cache add registry.k8s.io/pause:3.3                                                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ functional-889240 cache add registry.k8s.io/pause:latest                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ functional-889240 cache add minikube-local-cache-test:functional-889240                                       │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ functional-889240 cache delete minikube-local-cache-test:functional-889240                                    │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ ssh     │ functional-889240 ssh sudo crictl images                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ ssh     │ functional-889240 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ ssh     │ functional-889240 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │                     │
	│ cache   │ functional-889240 cache reload                                                                                │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ ssh     │ functional-889240 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ kubectl │ functional-889240 kubectl -- --context functional-889240 get pods                                             │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:08:11
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:08:11.068231   31648 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:08:11.068486   31648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:08:11.068496   31648 out.go:374] Setting ErrFile to fd 2...
	I1003 18:08:11.068502   31648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:08:11.068729   31648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:08:11.069215   31648 out.go:368] Setting JSON to false
	I1003 18:08:11.070085   31648 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3042,"bootTime":1759511849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:08:11.070168   31648 start.go:140] virtualization: kvm guest
	I1003 18:08:11.073397   31648 out.go:179] * [functional-889240] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:08:11.074567   31648 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:08:11.074571   31648 notify.go:220] Checking for updates...
	I1003 18:08:11.077123   31648 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:08:11.078380   31648 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:11.079542   31648 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:08:11.080665   31648 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:08:11.081754   31648 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:08:11.083246   31648 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:08:11.083337   31648 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:08:11.109195   31648 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:08:11.109276   31648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:08:11.161161   31648 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:08:11.151693527 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:08:11.161260   31648 docker.go:318] overlay module found
	I1003 18:08:11.162933   31648 out.go:179] * Using the docker driver based on existing profile
	I1003 18:08:11.164103   31648 start.go:304] selected driver: docker
	I1003 18:08:11.164115   31648 start.go:924] validating driver "docker" against &{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:08:11.164183   31648 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:08:11.164266   31648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:08:11.217384   31648 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:08:11.207171248 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:08:11.218094   31648 cni.go:84] Creating CNI manager for ""
	I1003 18:08:11.218156   31648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:08:11.218200   31648 start.go:348] cluster config:
	{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:08:11.220110   31648 out.go:179] * Starting "functional-889240" primary control-plane node in "functional-889240" cluster
	I1003 18:08:11.221257   31648 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:08:11.222336   31648 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:08:11.223595   31648 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:08:11.223644   31648 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:08:11.223654   31648 cache.go:58] Caching tarball of preloaded images
	I1003 18:08:11.223686   31648 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:08:11.223758   31648 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:08:11.223772   31648 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:08:11.223859   31648 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/config.json ...
	I1003 18:08:11.242913   31648 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:08:11.242930   31648 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:08:11.242946   31648 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:08:11.242988   31648 start.go:360] acquireMachinesLock for functional-889240: {Name:mk6750a9fb1c1c3747b0abf2aebe2a2d0047ae3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:08:11.243063   31648 start.go:364] duration metric: took 50.516µs to acquireMachinesLock for "functional-889240"
	I1003 18:08:11.243090   31648 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:08:11.243097   31648 fix.go:54] fixHost starting: 
	I1003 18:08:11.243298   31648 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:08:11.259925   31648 fix.go:112] recreateIfNeeded on functional-889240: state=Running err=<nil>
	W1003 18:08:11.259951   31648 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 18:08:11.261699   31648 out.go:252] * Updating the running docker "functional-889240" container ...
	I1003 18:08:11.261731   31648 machine.go:93] provisionDockerMachine start ...
	I1003 18:08:11.261806   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:11.278828   31648 main.go:141] libmachine: Using SSH client type: native
	I1003 18:08:11.279109   31648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:08:11.279121   31648 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:08:11.421621   31648 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-889240
	
	I1003 18:08:11.421642   31648 ubuntu.go:182] provisioning hostname "functional-889240"
	I1003 18:08:11.421693   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:11.439154   31648 main.go:141] libmachine: Using SSH client type: native
	I1003 18:08:11.439372   31648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:08:11.439384   31648 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-889240 && echo "functional-889240" | sudo tee /etc/hostname
	I1003 18:08:11.590164   31648 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-889240
	
	I1003 18:08:11.590238   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:11.607612   31648 main.go:141] libmachine: Using SSH client type: native
	I1003 18:08:11.607822   31648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:08:11.607839   31648 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-889240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-889240/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-889240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:08:11.750385   31648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:08:11.750412   31648 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:08:11.750443   31648 ubuntu.go:190] setting up certificates
	I1003 18:08:11.750454   31648 provision.go:84] configureAuth start
	I1003 18:08:11.750512   31648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-889240
	I1003 18:08:11.767416   31648 provision.go:143] copyHostCerts
	I1003 18:08:11.767453   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:08:11.767484   31648 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:08:11.767498   31648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:08:11.767564   31648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:08:11.767659   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:08:11.767679   31648 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:08:11.767686   31648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:08:11.767714   31648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:08:11.767934   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:08:11.768183   31648 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:08:11.768200   31648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:08:11.768251   31648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:08:11.768350   31648 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.functional-889240 san=[127.0.0.1 192.168.49.2 functional-889240 localhost minikube]
	I1003 18:08:11.920440   31648 provision.go:177] copyRemoteCerts
	I1003 18:08:11.920514   31648 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:08:11.920551   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:11.938061   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.037875   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:08:12.037937   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1003 18:08:12.054720   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:08:12.054773   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 18:08:12.071055   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:08:12.071110   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:08:12.087547   31648 provision.go:87] duration metric: took 337.079976ms to configureAuth
	I1003 18:08:12.087574   31648 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:08:12.087766   31648 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:08:12.087867   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.105048   31648 main.go:141] libmachine: Using SSH client type: native
	I1003 18:08:12.105289   31648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:08:12.105305   31648 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:08:12.366340   31648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:08:12.366367   31648 machine.go:96] duration metric: took 1.104629442s to provisionDockerMachine
	I1003 18:08:12.366377   31648 start.go:293] postStartSetup for "functional-889240" (driver="docker")
	I1003 18:08:12.366388   31648 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:08:12.366431   31648 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:08:12.366476   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.383468   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.483988   31648 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:08:12.487264   31648 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1003 18:08:12.487282   31648 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1003 18:08:12.487289   31648 command_runner.go:130] > VERSION_ID="12"
	I1003 18:08:12.487295   31648 command_runner.go:130] > VERSION="12 (bookworm)"
	I1003 18:08:12.487301   31648 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1003 18:08:12.487306   31648 command_runner.go:130] > ID=debian
	I1003 18:08:12.487313   31648 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1003 18:08:12.487320   31648 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1003 18:08:12.487329   31648 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1003 18:08:12.487402   31648 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:08:12.487425   31648 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:08:12.487438   31648 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:08:12.487491   31648 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:08:12.487581   31648 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:08:12.487593   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:08:12.487688   31648 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts -> hosts in /etc/test/nested/copy/12212
	I1003 18:08:12.487697   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts -> /etc/test/nested/copy/12212/hosts
	I1003 18:08:12.487740   31648 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/12212
	I1003 18:08:12.495127   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:08:12.511597   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts --> /etc/test/nested/copy/12212/hosts (40 bytes)
	I1003 18:08:12.528571   31648 start.go:296] duration metric: took 162.180752ms for postStartSetup
	I1003 18:08:12.528647   31648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:08:12.528710   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.546258   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.643641   31648 command_runner.go:130] > 39%
	I1003 18:08:12.643858   31648 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:08:12.648017   31648 command_runner.go:130] > 179G
	I1003 18:08:12.648284   31648 fix.go:56] duration metric: took 1.405183874s for fixHost
	I1003 18:08:12.648303   31648 start.go:83] releasing machines lock for "functional-889240", held for 1.405223544s
	I1003 18:08:12.648364   31648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-889240
	I1003 18:08:12.665548   31648 ssh_runner.go:195] Run: cat /version.json
	I1003 18:08:12.665589   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.665627   31648 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:08:12.665684   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:12.683771   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.684037   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:12.833728   31648 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1003 18:08:12.833784   31648 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1003 18:08:12.833903   31648 ssh_runner.go:195] Run: systemctl --version
	I1003 18:08:12.840008   31648 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1003 18:08:12.840056   31648 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1003 18:08:12.840282   31648 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:08:12.874135   31648 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1003 18:08:12.878285   31648 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1003 18:08:12.878575   31648 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:08:12.878637   31648 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:08:12.886227   31648 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 18:08:12.886250   31648 start.go:495] detecting cgroup driver to use...
	I1003 18:08:12.886282   31648 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:08:12.886327   31648 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:08:12.900106   31648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:08:12.911429   31648 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:08:12.911477   31648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:08:12.925289   31648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:08:12.936739   31648 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:08:13.020667   31648 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:08:13.102263   31648 docker.go:234] disabling docker service ...
	I1003 18:08:13.102328   31648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:08:13.115759   31648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:08:13.127581   31648 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:08:13.208801   31648 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:08:13.298232   31648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:08:13.314511   31648 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:08:13.327949   31648 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1003 18:08:13.328859   31648 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:08:13.328914   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.337658   31648 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:08:13.337709   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.346162   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.354712   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.363098   31648 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:08:13.370793   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.378940   31648 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.386700   31648 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
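(Taken together, the sed edits above pin the pause image, select the systemd cgroup manager, put conmon in the pod cgroup, and open unprivileged low ports in /etc/crio/crio.conf.d/02-crio.conf. A quick way to confirm the result on the node — expected values match the `crio config` dump later in this log:)

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",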
	I1003 18:08:13.394938   31648 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:08:13.401467   31648 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1003 18:08:13.402164   31648 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:08:13.409040   31648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:08:13.496423   31648 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 18:08:13.599891   31648 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:08:13.599956   31648 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:08:13.603739   31648 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1003 18:08:13.603760   31648 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1003 18:08:13.603769   31648 command_runner.go:130] > Device: 0,59	Inode: 3868        Links: 1
	I1003 18:08:13.603779   31648 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1003 18:08:13.603787   31648 command_runner.go:130] > Access: 2025-10-03 18:08:13.582699245 +0000
	I1003 18:08:13.603796   31648 command_runner.go:130] > Modify: 2025-10-03 18:08:13.582699245 +0000
	I1003 18:08:13.603806   31648 command_runner.go:130] > Change: 2025-10-03 18:08:13.582699245 +0000
	I1003 18:08:13.603811   31648 command_runner.go:130] >  Birth: 2025-10-03 18:08:13.582699245 +0000
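(The 60-second wait above is just a poll on the socket path until it exists; a rough shell equivalent of minikube's Go retry loop, as a sketch rather than the actual implementation:)

    # Minimal sketch, not minikube's actual code.
    for _ in $(seq 1 60); do
      stat /var/run/crio/crio.sock >/dev/null 2>&1 && break
      sleep 1
    done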
	I1003 18:08:13.603837   31648 start.go:563] Will wait 60s for crictl version
	I1003 18:08:13.603884   31648 ssh_runner.go:195] Run: which crictl
	I1003 18:08:13.607403   31648 command_runner.go:130] > /usr/local/bin/crictl
	I1003 18:08:13.607458   31648 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:08:13.630641   31648 command_runner.go:130] > Version:  0.1.0
	I1003 18:08:13.630667   31648 command_runner.go:130] > RuntimeName:  cri-o
	I1003 18:08:13.630673   31648 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1003 18:08:13.630680   31648 command_runner.go:130] > RuntimeApiVersion:  v1
	I1003 18:08:13.630699   31648 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:08:13.630764   31648 ssh_runner.go:195] Run: crio --version
	I1003 18:08:13.656303   31648 command_runner.go:130] > crio version 1.34.1
	I1003 18:08:13.656324   31648 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1003 18:08:13.656329   31648 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1003 18:08:13.656339   31648 command_runner.go:130] >    GitTreeState:   dirty
	I1003 18:08:13.656344   31648 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1003 18:08:13.656348   31648 command_runner.go:130] >    GoVersion:      go1.24.6
	I1003 18:08:13.656352   31648 command_runner.go:130] >    Compiler:       gc
	I1003 18:08:13.656365   31648 command_runner.go:130] >    Platform:       linux/amd64
	I1003 18:08:13.656372   31648 command_runner.go:130] >    Linkmode:       static
	I1003 18:08:13.656378   31648 command_runner.go:130] >    BuildTags:
	I1003 18:08:13.656383   31648 command_runner.go:130] >      static
	I1003 18:08:13.656387   31648 command_runner.go:130] >      netgo
	I1003 18:08:13.656393   31648 command_runner.go:130] >      osusergo
	I1003 18:08:13.656396   31648 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1003 18:08:13.656402   31648 command_runner.go:130] >      seccomp
	I1003 18:08:13.656405   31648 command_runner.go:130] >      apparmor
	I1003 18:08:13.656410   31648 command_runner.go:130] >      selinux
	I1003 18:08:13.656415   31648 command_runner.go:130] >    LDFlags:          unknown
	I1003 18:08:13.656421   31648 command_runner.go:130] >    SeccompEnabled:   true
	I1003 18:08:13.656426   31648 command_runner.go:130] >    AppArmorEnabled:  false
	I1003 18:08:13.657588   31648 ssh_runner.go:195] Run: crio --version
	I1003 18:08:13.682656   31648 command_runner.go:130] > crio version 1.34.1
	I1003 18:08:13.682693   31648 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1003 18:08:13.682698   31648 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1003 18:08:13.682703   31648 command_runner.go:130] >    GitTreeState:   dirty
	I1003 18:08:13.682708   31648 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1003 18:08:13.682712   31648 command_runner.go:130] >    GoVersion:      go1.24.6
	I1003 18:08:13.682716   31648 command_runner.go:130] >    Compiler:       gc
	I1003 18:08:13.682720   31648 command_runner.go:130] >    Platform:       linux/amd64
	I1003 18:08:13.682724   31648 command_runner.go:130] >    Linkmode:       static
	I1003 18:08:13.682728   31648 command_runner.go:130] >    BuildTags:
	I1003 18:08:13.682733   31648 command_runner.go:130] >      static
	I1003 18:08:13.682737   31648 command_runner.go:130] >      netgo
	I1003 18:08:13.682741   31648 command_runner.go:130] >      osusergo
	I1003 18:08:13.682746   31648 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1003 18:08:13.682753   31648 command_runner.go:130] >      seccomp
	I1003 18:08:13.682756   31648 command_runner.go:130] >      apparmor
	I1003 18:08:13.682759   31648 command_runner.go:130] >      selinux
	I1003 18:08:13.682763   31648 command_runner.go:130] >    LDFlags:          unknown
	I1003 18:08:13.682770   31648 command_runner.go:130] >    SeccompEnabled:   true
	I1003 18:08:13.682774   31648 command_runner.go:130] >    AppArmorEnabled:  false
	I1003 18:08:13.685817   31648 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:08:13.686852   31648 cli_runner.go:164] Run: docker network inspect functional-889240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:08:13.703291   31648 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:08:13.707207   31648 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1003 18:08:13.707295   31648 kubeadm.go:883] updating cluster {Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:08:13.707417   31648 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:08:13.707473   31648 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:08:13.737725   31648 command_runner.go:130] > {
	I1003 18:08:13.737745   31648 command_runner.go:130] >   "images":  [
	I1003 18:08:13.737749   31648 command_runner.go:130] >     {
	I1003 18:08:13.737755   31648 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1003 18:08:13.737763   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.737773   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1003 18:08:13.737780   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737786   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.737798   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1003 18:08:13.737807   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1003 18:08:13.737811   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737815   31648 command_runner.go:130] >       "size":  "109379124",
	I1003 18:08:13.737819   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.737828   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.737832   31648 command_runner.go:130] >     },
	I1003 18:08:13.737835   31648 command_runner.go:130] >     {
	I1003 18:08:13.737841   31648 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1003 18:08:13.737848   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.737859   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1003 18:08:13.737868   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737875   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.737886   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1003 18:08:13.737898   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1003 18:08:13.737904   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737908   31648 command_runner.go:130] >       "size":  "31470524",
	I1003 18:08:13.737914   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.737920   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.737931   31648 command_runner.go:130] >     },
	I1003 18:08:13.737939   31648 command_runner.go:130] >     {
	I1003 18:08:13.737948   31648 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1003 18:08:13.737958   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.737969   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1003 18:08:13.737987   31648 command_runner.go:130] >       ],
	I1003 18:08:13.737995   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738007   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1003 18:08:13.738023   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1003 18:08:13.738031   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738037   31648 command_runner.go:130] >       "size":  "76103547",
	I1003 18:08:13.738045   31648 command_runner.go:130] >       "username":  "nonroot",
	I1003 18:08:13.738049   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738054   31648 command_runner.go:130] >     },
	I1003 18:08:13.738058   31648 command_runner.go:130] >     {
	I1003 18:08:13.738070   31648 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1003 18:08:13.738081   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738091   31648 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1003 18:08:13.738100   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738110   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738124   31648 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1003 18:08:13.738137   31648 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1003 18:08:13.738143   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738148   31648 command_runner.go:130] >       "size":  "195976448",
	I1003 18:08:13.738155   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738165   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.738175   31648 command_runner.go:130] >       },
	I1003 18:08:13.738187   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738197   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738205   31648 command_runner.go:130] >     },
	I1003 18:08:13.738212   31648 command_runner.go:130] >     {
	I1003 18:08:13.738223   31648 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1003 18:08:13.738230   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738236   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1003 18:08:13.738245   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738256   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738270   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1003 18:08:13.738285   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1003 18:08:13.738293   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738301   31648 command_runner.go:130] >       "size":  "89046001",
	I1003 18:08:13.738308   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738312   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.738315   31648 command_runner.go:130] >       },
	I1003 18:08:13.738320   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738329   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738338   31648 command_runner.go:130] >     },
	I1003 18:08:13.738344   31648 command_runner.go:130] >     {
	I1003 18:08:13.738357   31648 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1003 18:08:13.738366   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738377   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1003 18:08:13.738386   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738395   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738402   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1003 18:08:13.738418   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1003 18:08:13.738427   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738434   31648 command_runner.go:130] >       "size":  "76004181",
	I1003 18:08:13.738443   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738453   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.738460   31648 command_runner.go:130] >       },
	I1003 18:08:13.738467   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738475   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738480   31648 command_runner.go:130] >     },
	I1003 18:08:13.738484   31648 command_runner.go:130] >     {
	I1003 18:08:13.738493   31648 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1003 18:08:13.738502   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738514   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1003 18:08:13.738522   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738531   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738545   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1003 18:08:13.738560   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1003 18:08:13.738568   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738572   31648 command_runner.go:130] >       "size":  "73138073",
	I1003 18:08:13.738580   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738586   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738595   31648 command_runner.go:130] >     },
	I1003 18:08:13.738605   31648 command_runner.go:130] >     {
	I1003 18:08:13.738617   31648 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1003 18:08:13.738625   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738634   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1003 18:08:13.738642   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738648   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738658   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1003 18:08:13.738674   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1003 18:08:13.738683   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738693   31648 command_runner.go:130] >       "size":  "53844823",
	I1003 18:08:13.738702   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738710   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.738718   31648 command_runner.go:130] >       },
	I1003 18:08:13.738724   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738733   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.738743   31648 command_runner.go:130] >     },
	I1003 18:08:13.738747   31648 command_runner.go:130] >     {
	I1003 18:08:13.738756   31648 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1003 18:08:13.738766   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.738777   31648 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1003 18:08:13.738785   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738792   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.738806   31648 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1003 18:08:13.738819   31648 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1003 18:08:13.738827   31648 command_runner.go:130] >       ],
	I1003 18:08:13.738832   31648 command_runner.go:130] >       "size":  "742092",
	I1003 18:08:13.738838   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.738843   31648 command_runner.go:130] >         "value":  "65535"
	I1003 18:08:13.738851   31648 command_runner.go:130] >       },
	I1003 18:08:13.738862   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.738871   31648 command_runner.go:130] >       "pinned":  true
	I1003 18:08:13.738885   31648 command_runner.go:130] >     }
	I1003 18:08:13.738890   31648 command_runner.go:130] >   ]
	I1003 18:08:13.738898   31648 command_runner.go:130] > }
	I1003 18:08:13.739109   31648 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:08:13.739126   31648 crio.go:433] Images already preloaded, skipping extraction
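(To spot-check the same preload state by hand, the JSON above reduces to just the tags — assuming jq is available on the node:)

    sudo crictl images --output json | jq -r '.images[].repoTags[]'
    # Expected to include registry.k8s.io/kube-apiserver:v1.34.1,
    # registry.k8s.io/etcd:3.6.4-0, registry.k8s.io/pause:3.10.1, etc.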
	I1003 18:08:13.739173   31648 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:08:13.761526   31648 command_runner.go:130] > {
	I1003 18:08:13.761550   31648 command_runner.go:130] >   "images":  [
	I1003 18:08:13.761558   31648 command_runner.go:130] >     {
	I1003 18:08:13.761569   31648 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1003 18:08:13.761577   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.761586   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1003 18:08:13.761592   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761599   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.761616   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1003 18:08:13.761631   31648 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1003 18:08:13.761639   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761646   31648 command_runner.go:130] >       "size":  "109379124",
	I1003 18:08:13.761659   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.761672   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.761681   31648 command_runner.go:130] >     },
	I1003 18:08:13.761686   31648 command_runner.go:130] >     {
	I1003 18:08:13.761698   31648 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1003 18:08:13.761708   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.761719   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1003 18:08:13.761728   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761737   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.761753   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1003 18:08:13.761770   31648 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1003 18:08:13.761779   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761789   31648 command_runner.go:130] >       "size":  "31470524",
	I1003 18:08:13.761799   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.761810   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.761818   31648 command_runner.go:130] >     },
	I1003 18:08:13.761823   31648 command_runner.go:130] >     {
	I1003 18:08:13.761836   31648 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1003 18:08:13.761845   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.761852   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1003 18:08:13.761860   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761866   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.761879   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1003 18:08:13.761889   31648 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1003 18:08:13.761897   31648 command_runner.go:130] >       ],
	I1003 18:08:13.761903   31648 command_runner.go:130] >       "size":  "76103547",
	I1003 18:08:13.761913   31648 command_runner.go:130] >       "username":  "nonroot",
	I1003 18:08:13.761922   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.761934   31648 command_runner.go:130] >     },
	I1003 18:08:13.761942   31648 command_runner.go:130] >     {
	I1003 18:08:13.761952   31648 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1003 18:08:13.761960   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.761970   31648 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1003 18:08:13.762000   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762008   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762019   31648 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1003 18:08:13.762032   31648 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1003 18:08:13.762041   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762051   31648 command_runner.go:130] >       "size":  "195976448",
	I1003 18:08:13.762060   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762068   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.762074   31648 command_runner.go:130] >       },
	I1003 18:08:13.762087   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762097   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762101   31648 command_runner.go:130] >     },
	I1003 18:08:13.762109   31648 command_runner.go:130] >     {
	I1003 18:08:13.762117   31648 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1003 18:08:13.762126   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762135   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1003 18:08:13.762143   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762149   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762163   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1003 18:08:13.762178   31648 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1003 18:08:13.762186   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762193   31648 command_runner.go:130] >       "size":  "89046001",
	I1003 18:08:13.762202   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762212   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.762221   31648 command_runner.go:130] >       },
	I1003 18:08:13.762229   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762239   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762248   31648 command_runner.go:130] >     },
	I1003 18:08:13.762256   31648 command_runner.go:130] >     {
	I1003 18:08:13.762265   31648 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1003 18:08:13.762275   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762284   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1003 18:08:13.762292   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762303   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762319   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1003 18:08:13.762335   31648 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1003 18:08:13.762343   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762353   31648 command_runner.go:130] >       "size":  "76004181",
	I1003 18:08:13.762361   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762367   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.762374   31648 command_runner.go:130] >       },
	I1003 18:08:13.762380   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762388   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762392   31648 command_runner.go:130] >     },
	I1003 18:08:13.762401   31648 command_runner.go:130] >     {
	I1003 18:08:13.762412   31648 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1003 18:08:13.762422   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762431   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1003 18:08:13.762438   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762444   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762456   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1003 18:08:13.762468   31648 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1003 18:08:13.762477   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762487   31648 command_runner.go:130] >       "size":  "73138073",
	I1003 18:08:13.762497   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762506   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762515   31648 command_runner.go:130] >     },
	I1003 18:08:13.762523   31648 command_runner.go:130] >     {
	I1003 18:08:13.762533   31648 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1003 18:08:13.762539   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762547   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1003 18:08:13.762552   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762559   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762570   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1003 18:08:13.762593   31648 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1003 18:08:13.762602   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762608   31648 command_runner.go:130] >       "size":  "53844823",
	I1003 18:08:13.762616   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762623   31648 command_runner.go:130] >         "value":  "0"
	I1003 18:08:13.762630   31648 command_runner.go:130] >       },
	I1003 18:08:13.762636   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762645   31648 command_runner.go:130] >       "pinned":  false
	I1003 18:08:13.762653   31648 command_runner.go:130] >     },
	I1003 18:08:13.762657   31648 command_runner.go:130] >     {
	I1003 18:08:13.762665   31648 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1003 18:08:13.762671   31648 command_runner.go:130] >       "repoTags":  [
	I1003 18:08:13.762681   31648 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1003 18:08:13.762686   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762695   31648 command_runner.go:130] >       "repoDigests":  [
	I1003 18:08:13.762706   31648 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1003 18:08:13.762720   31648 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1003 18:08:13.762728   31648 command_runner.go:130] >       ],
	I1003 18:08:13.762732   31648 command_runner.go:130] >       "size":  "742092",
	I1003 18:08:13.762737   31648 command_runner.go:130] >       "uid":  {
	I1003 18:08:13.762742   31648 command_runner.go:130] >         "value":  "65535"
	I1003 18:08:13.762747   31648 command_runner.go:130] >       },
	I1003 18:08:13.762751   31648 command_runner.go:130] >       "username":  "",
	I1003 18:08:13.762757   31648 command_runner.go:130] >       "pinned":  true
	I1003 18:08:13.762761   31648 command_runner.go:130] >     }
	I1003 18:08:13.762766   31648 command_runner.go:130] >   ]
	I1003 18:08:13.762769   31648 command_runner.go:130] > }
	I1003 18:08:13.763568   31648 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:08:13.763587   31648 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:08:13.763596   31648 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1003 18:08:13.763703   31648 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-889240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
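(The rendered kubelet unit above is installed as a systemd drop-in on the node; the drop-in path itself is not shown in this log. A hypothetical way to verify the effective ExecStart afterwards:)

    # Hypothetical verification step; not part of the logged run.
    minikube -p functional-889240 ssh -- sudo systemctl cat kubelet
    # The drop-in should show ExecStart pointing at
    # /var/lib/minikube/binaries/v1.34.1/kubelet with --hostname-override=functional-889240.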
	I1003 18:08:13.763779   31648 ssh_runner.go:195] Run: crio config
	I1003 18:08:13.802487   31648 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1003 18:08:13.802512   31648 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1003 18:08:13.802523   31648 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1003 18:08:13.802528   31648 command_runner.go:130] > #
	I1003 18:08:13.802538   31648 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1003 18:08:13.802546   31648 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1003 18:08:13.802555   31648 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1003 18:08:13.802566   31648 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1003 18:08:13.802572   31648 command_runner.go:130] > # reload'.
	I1003 18:08:13.802583   31648 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1003 18:08:13.802595   31648 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1003 18:08:13.802606   31648 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1003 18:08:13.802615   31648 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1003 18:08:13.802622   31648 command_runner.go:130] > [crio]
	I1003 18:08:13.802632   31648 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1003 18:08:13.802640   31648 command_runner.go:130] > # containers images, in this directory.
	I1003 18:08:13.802653   31648 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1003 18:08:13.802671   31648 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1003 18:08:13.802680   31648 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1003 18:08:13.802693   31648 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1003 18:08:13.802704   31648 command_runner.go:130] > # imagestore = ""
	I1003 18:08:13.802714   31648 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1003 18:08:13.802726   31648 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1003 18:08:13.802736   31648 command_runner.go:130] > # storage_driver = "overlay"
	I1003 18:08:13.802747   31648 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1003 18:08:13.802761   31648 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1003 18:08:13.802770   31648 command_runner.go:130] > # storage_option = [
	I1003 18:08:13.802777   31648 command_runner.go:130] > # ]
	I1003 18:08:13.802788   31648 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1003 18:08:13.802800   31648 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1003 18:08:13.802808   31648 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1003 18:08:13.802820   31648 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1003 18:08:13.802830   31648 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1003 18:08:13.802835   31648 command_runner.go:130] > # always happen on a node reboot
	I1003 18:08:13.802840   31648 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1003 18:08:13.802849   31648 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1003 18:08:13.802860   31648 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1003 18:08:13.802865   31648 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1003 18:08:13.802871   31648 command_runner.go:130] > # version_file_persist = ""
	I1003 18:08:13.802882   31648 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1003 18:08:13.802899   31648 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1003 18:08:13.802906   31648 command_runner.go:130] > # internal_wipe = true
	I1003 18:08:13.802917   31648 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1003 18:08:13.802929   31648 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1003 18:08:13.802935   31648 command_runner.go:130] > # internal_repair = true
	I1003 18:08:13.802943   31648 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1003 18:08:13.802953   31648 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1003 18:08:13.802966   31648 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1003 18:08:13.802985   31648 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1003 18:08:13.802996   31648 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1003 18:08:13.803006   31648 command_runner.go:130] > [crio.api]
	I1003 18:08:13.803015   31648 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1003 18:08:13.803025   31648 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1003 18:08:13.803033   31648 command_runner.go:130] > # IP address on which the stream server will listen.
	I1003 18:08:13.803043   31648 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1003 18:08:13.803054   31648 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1003 18:08:13.803065   31648 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1003 18:08:13.803072   31648 command_runner.go:130] > # stream_port = "0"
	I1003 18:08:13.803083   31648 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1003 18:08:13.803090   31648 command_runner.go:130] > # stream_enable_tls = false
	I1003 18:08:13.803102   31648 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1003 18:08:13.803114   31648 command_runner.go:130] > # stream_idle_timeout = ""
	I1003 18:08:13.803124   31648 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1003 18:08:13.803136   31648 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1003 18:08:13.803146   31648 command_runner.go:130] > # stream_tls_cert = ""
	I1003 18:08:13.803156   31648 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1003 18:08:13.803166   31648 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1003 18:08:13.803175   31648 command_runner.go:130] > # stream_tls_key = ""
	I1003 18:08:13.803185   31648 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1003 18:08:13.803197   31648 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1003 18:08:13.803202   31648 command_runner.go:130] > # automatically pick up the changes.
	I1003 18:08:13.803207   31648 command_runner.go:130] > # stream_tls_ca = ""
	I1003 18:08:13.803271   31648 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1003 18:08:13.803286   31648 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1003 18:08:13.803296   31648 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1003 18:08:13.803308   31648 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1003 18:08:13.803318   31648 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1003 18:08:13.803331   31648 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1003 18:08:13.803338   31648 command_runner.go:130] > [crio.runtime]
	I1003 18:08:13.803350   31648 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1003 18:08:13.803358   31648 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1003 18:08:13.803367   31648 command_runner.go:130] > # "nofile=1024:2048"
	I1003 18:08:13.803378   31648 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1003 18:08:13.803388   31648 command_runner.go:130] > # default_ulimits = [
	I1003 18:08:13.803393   31648 command_runner.go:130] > # ]
	I1003 18:08:13.803403   31648 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1003 18:08:13.803409   31648 command_runner.go:130] > # no_pivot = false
	I1003 18:08:13.803422   31648 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1003 18:08:13.803432   31648 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1003 18:08:13.803444   31648 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1003 18:08:13.803455   31648 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1003 18:08:13.803462   31648 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1003 18:08:13.803473   31648 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1003 18:08:13.803482   31648 command_runner.go:130] > # conmon = ""
	I1003 18:08:13.803489   31648 command_runner.go:130] > # Cgroup setting for conmon
	I1003 18:08:13.803504   31648 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1003 18:08:13.803513   31648 command_runner.go:130] > conmon_cgroup = "pod"
	I1003 18:08:13.803523   31648 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1003 18:08:13.803534   31648 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1003 18:08:13.803545   31648 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1003 18:08:13.803554   31648 command_runner.go:130] > # conmon_env = [
	I1003 18:08:13.803560   31648 command_runner.go:130] > # ]
	I1003 18:08:13.803573   31648 command_runner.go:130] > # Additional environment variables to set for all the
	I1003 18:08:13.803583   31648 command_runner.go:130] > # containers. These are overridden if set in the
	I1003 18:08:13.803595   31648 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1003 18:08:13.803603   31648 command_runner.go:130] > # default_env = [
	I1003 18:08:13.803611   31648 command_runner.go:130] > # ]
	I1003 18:08:13.803620   31648 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1003 18:08:13.803635   31648 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1003 18:08:13.803644   31648 command_runner.go:130] > # selinux = false
	I1003 18:08:13.803657   31648 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1003 18:08:13.803681   31648 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1003 18:08:13.803693   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.803703   31648 command_runner.go:130] > # seccomp_profile = ""
	I1003 18:08:13.803714   31648 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1003 18:08:13.803725   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.803735   31648 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1003 18:08:13.803746   31648 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1003 18:08:13.803760   31648 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1003 18:08:13.803772   31648 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1003 18:08:13.803785   31648 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1003 18:08:13.803796   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.803803   31648 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1003 18:08:13.803817   31648 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1003 18:08:13.803827   31648 command_runner.go:130] > # the cgroup blockio controller.
	I1003 18:08:13.803833   31648 command_runner.go:130] > # blockio_config_file = ""
	I1003 18:08:13.803847   31648 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1003 18:08:13.803856   31648 command_runner.go:130] > # blockio parameters.
	I1003 18:08:13.803862   31648 command_runner.go:130] > # blockio_reload = false
	I1003 18:08:13.803869   31648 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1003 18:08:13.803877   31648 command_runner.go:130] > # irqbalance daemon.
	I1003 18:08:13.803883   31648 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1003 18:08:13.803890   31648 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1003 18:08:13.803906   31648 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1003 18:08:13.803916   31648 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1003 18:08:13.803925   31648 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1003 18:08:13.803933   31648 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1003 18:08:13.803939   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.803951   31648 command_runner.go:130] > # rdt_config_file = ""
	I1003 18:08:13.803958   31648 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1003 18:08:13.803970   31648 command_runner.go:130] > # cgroup_manager = "systemd"
	I1003 18:08:13.803987   31648 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1003 18:08:13.803998   31648 command_runner.go:130] > # separate_pull_cgroup = ""
	I1003 18:08:13.804008   31648 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1003 18:08:13.804017   31648 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1003 18:08:13.804026   31648 command_runner.go:130] > # will be added.
	I1003 18:08:13.804035   31648 command_runner.go:130] > # default_capabilities = [
	I1003 18:08:13.804043   31648 command_runner.go:130] > # 	"CHOWN",
	I1003 18:08:13.804050   31648 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1003 18:08:13.804055   31648 command_runner.go:130] > # 	"FSETID",
	I1003 18:08:13.804066   31648 command_runner.go:130] > # 	"FOWNER",
	I1003 18:08:13.804071   31648 command_runner.go:130] > # 	"SETGID",
	I1003 18:08:13.804087   31648 command_runner.go:130] > # 	"SETUID",
	I1003 18:08:13.804093   31648 command_runner.go:130] > # 	"SETPCAP",
	I1003 18:08:13.804097   31648 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1003 18:08:13.804102   31648 command_runner.go:130] > # 	"KILL",
	I1003 18:08:13.804105   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804112   31648 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1003 18:08:13.804121   31648 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1003 18:08:13.804125   31648 command_runner.go:130] > # add_inheritable_capabilities = false
	I1003 18:08:13.804133   31648 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1003 18:08:13.804138   31648 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1003 18:08:13.804143   31648 command_runner.go:130] > default_sysctls = [
	I1003 18:08:13.804147   31648 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1003 18:08:13.804150   31648 command_runner.go:130] > ]
	I1003 18:08:13.804157   31648 command_runner.go:130] > # List of devices on the host that a
	I1003 18:08:13.804163   31648 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1003 18:08:13.804169   31648 command_runner.go:130] > # allowed_devices = [
	I1003 18:08:13.804173   31648 command_runner.go:130] > # 	"/dev/fuse",
	I1003 18:08:13.804178   31648 command_runner.go:130] > # 	"/dev/net/tun",
	I1003 18:08:13.804181   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804188   31648 command_runner.go:130] > # List of additional devices. specified as
	I1003 18:08:13.804194   31648 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1003 18:08:13.804201   31648 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1003 18:08:13.804207   31648 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1003 18:08:13.804212   31648 command_runner.go:130] > # additional_devices = [
	I1003 18:08:13.804215   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804222   31648 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1003 18:08:13.804226   31648 command_runner.go:130] > # cdi_spec_dirs = [
	I1003 18:08:13.804231   31648 command_runner.go:130] > # 	"/etc/cdi",
	I1003 18:08:13.804235   31648 command_runner.go:130] > # 	"/var/run/cdi",
	I1003 18:08:13.804237   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804243   31648 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1003 18:08:13.804251   31648 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1003 18:08:13.804254   31648 command_runner.go:130] > # Defaults to false.
	I1003 18:08:13.804261   31648 command_runner.go:130] > # device_ownership_from_security_context = false
	I1003 18:08:13.804268   31648 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1003 18:08:13.804275   31648 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1003 18:08:13.804279   31648 command_runner.go:130] > # hooks_dir = [
	I1003 18:08:13.804286   31648 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1003 18:08:13.804290   31648 command_runner.go:130] > # ]
	I1003 18:08:13.804297   31648 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1003 18:08:13.804303   31648 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1003 18:08:13.804309   31648 command_runner.go:130] > # its default mounts from the following two files:
	I1003 18:08:13.804312   31648 command_runner.go:130] > #
	I1003 18:08:13.804320   31648 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1003 18:08:13.804326   31648 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1003 18:08:13.804333   31648 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1003 18:08:13.804336   31648 command_runner.go:130] > #
	I1003 18:08:13.804342   31648 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1003 18:08:13.804349   31648 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1003 18:08:13.804356   31648 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1003 18:08:13.804363   31648 command_runner.go:130] > #      only add mounts it finds in this file.
	I1003 18:08:13.804366   31648 command_runner.go:130] > #
	I1003 18:08:13.804372   31648 command_runner.go:130] > # default_mounts_file = ""
	I1003 18:08:13.804376   31648 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1003 18:08:13.804384   31648 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1003 18:08:13.804388   31648 command_runner.go:130] > # pids_limit = -1
	I1003 18:08:13.804396   31648 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1003 18:08:13.804401   31648 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1003 18:08:13.804409   31648 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1003 18:08:13.804417   31648 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1003 18:08:13.804422   31648 command_runner.go:130] > # log_size_max = -1
	I1003 18:08:13.804429   31648 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1003 18:08:13.804435   31648 command_runner.go:130] > # log_to_journald = false
	I1003 18:08:13.804441   31648 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1003 18:08:13.804447   31648 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1003 18:08:13.804451   31648 command_runner.go:130] > # Path to directory for container attach sockets.
	I1003 18:08:13.804458   31648 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1003 18:08:13.804463   31648 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1003 18:08:13.804469   31648 command_runner.go:130] > # bind_mount_prefix = ""
	I1003 18:08:13.804473   31648 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1003 18:08:13.804479   31648 command_runner.go:130] > # read_only = false
	I1003 18:08:13.804486   31648 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1003 18:08:13.804494   31648 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1003 18:08:13.804497   31648 command_runner.go:130] > # live configuration reload.
	I1003 18:08:13.804501   31648 command_runner.go:130] > # log_level = "info"
	I1003 18:08:13.804508   31648 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1003 18:08:13.804513   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.804519   31648 command_runner.go:130] > # log_filter = ""
	I1003 18:08:13.804524   31648 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1003 18:08:13.804532   31648 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1003 18:08:13.804535   31648 command_runner.go:130] > # separated by comma.
	I1003 18:08:13.804544   31648 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1003 18:08:13.804551   31648 command_runner.go:130] > # uid_mappings = ""
	I1003 18:08:13.804557   31648 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1003 18:08:13.804564   31648 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1003 18:08:13.804569   31648 command_runner.go:130] > # separated by comma.
	I1003 18:08:13.804578   31648 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1003 18:08:13.804582   31648 command_runner.go:130] > # gid_mappings = ""
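A one-line sketch of the containerID:HostID:Size form described above; the ranges here are illustrative assumptions, not values from this run:

	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"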
	I1003 18:08:13.804589   31648 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1003 18:08:13.804595   31648 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1003 18:08:13.804603   31648 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1003 18:08:13.804612   31648 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1003 18:08:13.804618   31648 command_runner.go:130] > # minimum_mappable_uid = -1
	I1003 18:08:13.804624   31648 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1003 18:08:13.804631   31648 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1003 18:08:13.804636   31648 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1003 18:08:13.804645   31648 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1003 18:08:13.804651   31648 command_runner.go:130] > # minimum_mappable_gid = -1
	I1003 18:08:13.804657   31648 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1003 18:08:13.804669   31648 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1003 18:08:13.804674   31648 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I1003 18:08:13.804680   31648 command_runner.go:130] > # ctr_stop_timeout = 30
	I1003 18:08:13.804685   31648 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1003 18:08:13.804693   31648 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1003 18:08:13.804697   31648 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1003 18:08:13.804703   31648 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1003 18:08:13.804707   31648 command_runner.go:130] > # drop_infra_ctr = true
	I1003 18:08:13.804715   31648 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1003 18:08:13.804720   31648 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1003 18:08:13.804728   31648 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1003 18:08:13.804735   31648 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1003 18:08:13.804742   31648 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1003 18:08:13.804749   31648 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1003 18:08:13.804754   31648 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1003 18:08:13.804761   31648 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1003 18:08:13.804765   31648 command_runner.go:130] > # shared_cpuset = ""
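Both cpuset options take the Linux CPU list format mentioned above; a hypothetical split reserving cores 0-1 for infra containers and core 4 as shared would read:

	infra_ctr_cpuset = "0-1"
	shared_cpuset = "4"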
	I1003 18:08:13.804773   31648 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1003 18:08:13.804777   31648 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1003 18:08:13.804783   31648 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1003 18:08:13.804789   31648 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1003 18:08:13.804795   31648 command_runner.go:130] > # pinns_path = ""
	I1003 18:08:13.804800   31648 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1003 18:08:13.804808   31648 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1003 18:08:13.804813   31648 command_runner.go:130] > # enable_criu_support = true
	I1003 18:08:13.804819   31648 command_runner.go:130] > # Enable/disable the generation of the container and
	I1003 18:08:13.804825   31648 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1003 18:08:13.804832   31648 command_runner.go:130] > # enable_pod_events = false
	I1003 18:08:13.804837   31648 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1003 18:08:13.804844   31648 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1003 18:08:13.804848   31648 command_runner.go:130] > # default_runtime = "crun"
	I1003 18:08:13.804855   31648 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1003 18:08:13.804862   31648 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1003 18:08:13.804874   31648 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1003 18:08:13.804881   31648 command_runner.go:130] > # creation as a file is not desired either.
	I1003 18:08:13.804889   31648 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1003 18:08:13.804896   31648 command_runner.go:130] > # the hostname is being managed dynamically.
	I1003 18:08:13.804900   31648 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1003 18:08:13.804905   31648 command_runner.go:130] > # ]
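Following the /etc/hostname example given above, a populated list would look like this (illustrative; the option is commented out in this run):

	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]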
	I1003 18:08:13.804912   31648 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1003 18:08:13.804920   31648 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1003 18:08:13.804926   31648 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1003 18:08:13.804931   31648 command_runner.go:130] > # Each entry in the table should follow the format:
	I1003 18:08:13.804934   31648 command_runner.go:130] > #
	I1003 18:08:13.804941   31648 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1003 18:08:13.804945   31648 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1003 18:08:13.804952   31648 command_runner.go:130] > # runtime_type = "oci"
	I1003 18:08:13.804956   31648 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1003 18:08:13.804963   31648 command_runner.go:130] > # inherit_default_runtime = false
	I1003 18:08:13.804968   31648 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1003 18:08:13.804988   31648 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1003 18:08:13.804996   31648 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1003 18:08:13.805005   31648 command_runner.go:130] > # monitor_env = []
	I1003 18:08:13.805011   31648 command_runner.go:130] > # privileged_without_host_devices = false
	I1003 18:08:13.805017   31648 command_runner.go:130] > # allowed_annotations = []
	I1003 18:08:13.805022   31648 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1003 18:08:13.805028   31648 command_runner.go:130] > # no_sync_log = false
	I1003 18:08:13.805032   31648 command_runner.go:130] > # default_annotations = {}
	I1003 18:08:13.805038   31648 command_runner.go:130] > # stream_websockets = false
	I1003 18:08:13.805042   31648 command_runner.go:130] > # seccomp_profile = ""
	I1003 18:08:13.805062   31648 command_runner.go:130] > # Where:
	I1003 18:08:13.805069   31648 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1003 18:08:13.805075   31648 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1003 18:08:13.805081   31648 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1003 18:08:13.805089   31648 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1003 18:08:13.805092   31648 command_runner.go:130] > #   in $PATH.
	I1003 18:08:13.805100   31648 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1003 18:08:13.805105   31648 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1003 18:08:13.805112   31648 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1003 18:08:13.805115   31648 command_runner.go:130] > #   state.
	I1003 18:08:13.805121   31648 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1003 18:08:13.805128   31648 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1003 18:08:13.805133   31648 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1003 18:08:13.805141   31648 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1003 18:08:13.805146   31648 command_runner.go:130] > #   the values from the default runtime on load time.
	I1003 18:08:13.805153   31648 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1003 18:08:13.805158   31648 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1003 18:08:13.805165   31648 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1003 18:08:13.805177   31648 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1003 18:08:13.805183   31648 command_runner.go:130] > #   The currently recognized values are:
	I1003 18:08:13.805190   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1003 18:08:13.805199   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1003 18:08:13.805207   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1003 18:08:13.805214   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1003 18:08:13.805221   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1003 18:08:13.805229   31648 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1003 18:08:13.805235   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1003 18:08:13.805243   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1003 18:08:13.805251   31648 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1003 18:08:13.805257   31648 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1003 18:08:13.805265   31648 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1003 18:08:13.805273   31648 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1003 18:08:13.805278   31648 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1003 18:08:13.805285   31648 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1003 18:08:13.805291   31648 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1003 18:08:13.805300   31648 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1003 18:08:13.805308   31648 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1003 18:08:13.805312   31648 command_runner.go:130] > #   deprecated option "conmon".
	I1003 18:08:13.805319   31648 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1003 18:08:13.805326   31648 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1003 18:08:13.805332   31648 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1003 18:08:13.805339   31648 command_runner.go:130] > #   should be moved to the container's cgroup
	I1003 18:08:13.805346   31648 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1003 18:08:13.805352   31648 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1003 18:08:13.805358   31648 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1003 18:08:13.805364   31648 command_runner.go:130] > #   conmon-rs by using:
	I1003 18:08:13.805370   31648 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1003 18:08:13.805379   31648 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1003 18:08:13.805388   31648 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1003 18:08:13.805395   31648 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1003 18:08:13.805401   31648 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1003 18:08:13.805415   31648 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1003 18:08:13.805423   31648 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1003 18:08:13.805430   31648 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1003 18:08:13.805437   31648 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1003 18:08:13.805449   31648 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1003 18:08:13.805455   31648 command_runner.go:130] > #   when a machine crash happens.
	I1003 18:08:13.805462   31648 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1003 18:08:13.805471   31648 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1003 18:08:13.805480   31648 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1003 18:08:13.805485   31648 command_runner.go:130] > #   seccomp profile for the runtime.
	I1003 18:08:13.805491   31648 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1003 18:08:13.805499   31648 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1003 18:08:13.805504   31648 command_runner.go:130] > #
	I1003 18:08:13.805508   31648 command_runner.go:130] > # Using the seccomp notifier feature:
	I1003 18:08:13.805513   31648 command_runner.go:130] > #
	I1003 18:08:13.805518   31648 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1003 18:08:13.805528   31648 command_runner.go:130] > # blocked syscalls (permission denied errors) have a negative impact on the workload.
	I1003 18:08:13.805533   31648 command_runner.go:130] > #
	I1003 18:08:13.805539   31648 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1003 18:08:13.805547   31648 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1003 18:08:13.805549   31648 command_runner.go:130] > #
	I1003 18:08:13.805555   31648 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1003 18:08:13.805560   31648 command_runner.go:130] > # feature.
	I1003 18:08:13.805563   31648 command_runner.go:130] > #
	I1003 18:08:13.805568   31648 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1003 18:08:13.805576   31648 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1003 18:08:13.805582   31648 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1003 18:08:13.805589   31648 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1003 18:08:13.805595   31648 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1003 18:08:13.805600   31648 command_runner.go:130] > #
	I1003 18:08:13.805605   31648 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1003 18:08:13.805614   31648 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1003 18:08:13.805619   31648 command_runner.go:130] > #
	I1003 18:08:13.805625   31648 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1003 18:08:13.805632   31648 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1003 18:08:13.805635   31648 command_runner.go:130] > #
	I1003 18:08:13.805641   31648 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1003 18:08:13.805649   31648 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1003 18:08:13.805652   31648 command_runner.go:130] > # limitation.
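Tying the notifier description together, a hedged sketch of the two pieces involved; the drop-in path is a hypothetical example, while the annotation value "stop" and the restartPolicy requirement come from the comments above:

	# /etc/crio/crio.conf.d/99-notifier.conf (hypothetical drop-in)
	[crio.runtime.runtimes.runc]
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]

	# pod manifest excerpt
	metadata:
	  annotations:
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  restartPolicy: Never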
	I1003 18:08:13.805656   31648 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1003 18:08:13.805666   31648 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1003 18:08:13.805671   31648 command_runner.go:130] > runtime_type = ""
	I1003 18:08:13.805675   31648 command_runner.go:130] > runtime_root = "/run/crun"
	I1003 18:08:13.805679   31648 command_runner.go:130] > inherit_default_runtime = false
	I1003 18:08:13.805683   31648 command_runner.go:130] > runtime_config_path = ""
	I1003 18:08:13.805689   31648 command_runner.go:130] > container_min_memory = ""
	I1003 18:08:13.805694   31648 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1003 18:08:13.805700   31648 command_runner.go:130] > monitor_cgroup = "pod"
	I1003 18:08:13.805704   31648 command_runner.go:130] > monitor_exec_cgroup = ""
	I1003 18:08:13.805710   31648 command_runner.go:130] > allowed_annotations = [
	I1003 18:08:13.805714   31648 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1003 18:08:13.805718   31648 command_runner.go:130] > ]
	I1003 18:08:13.805722   31648 command_runner.go:130] > privileged_without_host_devices = false
	I1003 18:08:13.805728   31648 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1003 18:08:13.805733   31648 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1003 18:08:13.805738   31648 command_runner.go:130] > runtime_type = ""
	I1003 18:08:13.805742   31648 command_runner.go:130] > runtime_root = "/run/runc"
	I1003 18:08:13.805748   31648 command_runner.go:130] > inherit_default_runtime = false
	I1003 18:08:13.805751   31648 command_runner.go:130] > runtime_config_path = ""
	I1003 18:08:13.805758   31648 command_runner.go:130] > container_min_memory = ""
	I1003 18:08:13.805762   31648 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1003 18:08:13.805767   31648 command_runner.go:130] > monitor_cgroup = "pod"
	I1003 18:08:13.805771   31648 command_runner.go:130] > monitor_exec_cgroup = ""
	I1003 18:08:13.805778   31648 command_runner.go:130] > privileged_without_host_devices = false
	I1003 18:08:13.805784   31648 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1003 18:08:13.805790   31648 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1003 18:08:13.805796   31648 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1003 18:08:13.805805   31648 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1003 18:08:13.805817   31648 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1003 18:08:13.805828   31648 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1003 18:08:13.805837   31648 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1003 18:08:13.805842   31648 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1003 18:08:13.805852   31648 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1003 18:08:13.805860   31648 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1003 18:08:13.805867   31648 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1003 18:08:13.805873   31648 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1003 18:08:13.805878   31648 command_runner.go:130] > # Example:
	I1003 18:08:13.805882   31648 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1003 18:08:13.805886   31648 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1003 18:08:13.805893   31648 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1003 18:08:13.805899   31648 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1003 18:08:13.805903   31648 command_runner.go:130] > # cpuset = "0-1"
	I1003 18:08:13.805906   31648 command_runner.go:130] > # cpushares = "5"
	I1003 18:08:13.805910   31648 command_runner.go:130] > # cpuquota = "1000"
	I1003 18:08:13.805919   31648 command_runner.go:130] > # cpuperiod = "100000"
	I1003 18:08:13.805924   31648 command_runner.go:130] > # cpulimit = "35"
	I1003 18:08:13.805933   31648 command_runner.go:130] > # Where:
	I1003 18:08:13.805940   31648 command_runner.go:130] > # The workload name is workload-type.
	I1003 18:08:13.805950   31648 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1003 18:08:13.805955   31648 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1003 18:08:13.805960   31648 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1003 18:08:13.805971   31648 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1003 18:08:13.805994   31648 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
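Read as pod annotations, the two pieces above combine roughly as follows, keeping the example's own placeholders rather than inventing concrete names:

	metadata:
	  annotations:
	    io.crio/workload: ""
	    io.crio.workload-type/$container_name: '{"cpushares": "value"}'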
	I1003 18:08:13.806006   31648 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1003 18:08:13.806019   31648 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1003 18:08:13.806027   31648 command_runner.go:130] > # Default value is set to true
	I1003 18:08:13.806031   31648 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1003 18:08:13.806036   31648 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1003 18:08:13.806040   31648 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1003 18:08:13.806047   31648 command_runner.go:130] > # Default value is set to 'false'
	I1003 18:08:13.806052   31648 command_runner.go:130] > # disable_hostport_mapping = false
	I1003 18:08:13.806057   31648 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1003 18:08:13.806066   31648 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1003 18:08:13.806074   31648 command_runner.go:130] > # timezone = ""
	I1003 18:08:13.806085   31648 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1003 18:08:13.806093   31648 command_runner.go:130] > #
	I1003 18:08:13.806105   31648 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1003 18:08:13.806116   31648 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1003 18:08:13.806122   31648 command_runner.go:130] > [crio.image]
	I1003 18:08:13.806127   31648 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1003 18:08:13.806134   31648 command_runner.go:130] > # default_transport = "docker://"
	I1003 18:08:13.806139   31648 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1003 18:08:13.806147   31648 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1003 18:08:13.806154   31648 command_runner.go:130] > # global_auth_file = ""
	I1003 18:08:13.806159   31648 command_runner.go:130] > # The image used to instantiate infra containers.
	I1003 18:08:13.806165   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.806170   31648 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1003 18:08:13.806178   31648 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1003 18:08:13.806185   31648 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1003 18:08:13.806190   31648 command_runner.go:130] > # This option supports live configuration reload.
	I1003 18:08:13.806196   31648 command_runner.go:130] > # pause_image_auth_file = ""
	I1003 18:08:13.806202   31648 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1003 18:08:13.806209   31648 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1003 18:08:13.806215   31648 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1003 18:08:13.806220   31648 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1003 18:08:13.806226   31648 command_runner.go:130] > # pause_command = "/pause"
	I1003 18:08:13.806231   31648 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1003 18:08:13.806239   31648 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1003 18:08:13.806244   31648 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1003 18:08:13.806252   31648 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1003 18:08:13.806257   31648 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1003 18:08:13.806264   31648 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1003 18:08:13.806268   31648 command_runner.go:130] > # pinned_images = [
	I1003 18:08:13.806271   31648 command_runner.go:130] > # ]
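To illustrate the three pattern styles described above (the pause image appears earlier in this config; the other two entries are hypothetical):

	pinned_images = [
		"registry.k8s.io/pause:3.10.1",	# exact match
		"quay.io/crio/*",	# glob: trailing wildcard
		"*critical*",	# keyword: wildcards on both ends
	]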
	I1003 18:08:13.806278   31648 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1003 18:08:13.806286   31648 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1003 18:08:13.806293   31648 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1003 18:08:13.806301   31648 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1003 18:08:13.806306   31648 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1003 18:08:13.806312   31648 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1003 18:08:13.806318   31648 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1003 18:08:13.806325   31648 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1003 18:08:13.806333   31648 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1003 18:08:13.806341   31648 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1003 18:08:13.806347   31648 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1003 18:08:13.806353   31648 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1003 18:08:13.806358   31648 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1003 18:08:13.806366   31648 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1003 18:08:13.806369   31648 command_runner.go:130] > # changing them here.
	I1003 18:08:13.806374   31648 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1003 18:08:13.806380   31648 command_runner.go:130] > # insecure_registries = [
	I1003 18:08:13.806383   31648 command_runner.go:130] > # ]
	I1003 18:08:13.806391   31648 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1003 18:08:13.806398   31648 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1003 18:08:13.806404   31648 command_runner.go:130] > # image_volumes = "mkdir"
	I1003 18:08:13.806409   31648 command_runner.go:130] > # Temporary directory to use for storing big files
	I1003 18:08:13.806415   31648 command_runner.go:130] > # big_files_temporary_dir = ""
	I1003 18:08:13.806420   31648 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1003 18:08:13.806429   31648 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1003 18:08:13.806435   31648 command_runner.go:130] > # auto_reload_registries = false
	I1003 18:08:13.806441   31648 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1003 18:08:13.806450   31648 command_runner.go:130] > # gets canceled. This value will also be used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1003 18:08:13.806467   31648 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1003 18:08:13.806473   31648 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1003 18:08:13.806477   31648 command_runner.go:130] > # The mode of short name resolution.
	I1003 18:08:13.806484   31648 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1003 18:08:13.806492   31648 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1003 18:08:13.806499   31648 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1003 18:08:13.806503   31648 command_runner.go:130] > # short_name_mode = "enforcing"
	I1003 18:08:13.806511   31648 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1003 18:08:13.806518   31648 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1003 18:08:13.806523   31648 command_runner.go:130] > # oci_artifact_mount_support = true
	I1003 18:08:13.806530   31648 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1003 18:08:13.806535   31648 command_runner.go:130] > # CNI plugins.
	I1003 18:08:13.806541   31648 command_runner.go:130] > [crio.network]
	I1003 18:08:13.806546   31648 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1003 18:08:13.806553   31648 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1003 18:08:13.806557   31648 command_runner.go:130] > # cni_default_network = ""
	I1003 18:08:13.806562   31648 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1003 18:08:13.806568   31648 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1003 18:08:13.806573   31648 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1003 18:08:13.806580   31648 command_runner.go:130] > # plugin_dirs = [
	I1003 18:08:13.806584   31648 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1003 18:08:13.806589   31648 command_runner.go:130] > # ]
	I1003 18:08:13.806593   31648 command_runner.go:130] > # List of included pod metrics.
	I1003 18:08:13.806599   31648 command_runner.go:130] > # included_pod_metrics = [
	I1003 18:08:13.806603   31648 command_runner.go:130] > # ]
	I1003 18:08:13.806610   31648 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1003 18:08:13.806614   31648 command_runner.go:130] > [crio.metrics]
	I1003 18:08:13.806618   31648 command_runner.go:130] > # Globally enable or disable metrics support.
	I1003 18:08:13.806624   31648 command_runner.go:130] > # enable_metrics = false
	I1003 18:08:13.806629   31648 command_runner.go:130] > # Specify enabled metrics collectors.
	I1003 18:08:13.806635   31648 command_runner.go:130] > # Per default all metrics are enabled.
	I1003 18:08:13.806640   31648 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1003 18:08:13.806647   31648 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1003 18:08:13.806654   31648 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1003 18:08:13.806662   31648 command_runner.go:130] > # metrics_collectors = [
	I1003 18:08:13.806668   31648 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1003 18:08:13.806672   31648 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1003 18:08:13.806676   31648 command_runner.go:130] > # 	"containers_oom_total",
	I1003 18:08:13.806679   31648 command_runner.go:130] > # 	"processes_defunct",
	I1003 18:08:13.806682   31648 command_runner.go:130] > # 	"operations_total",
	I1003 18:08:13.806687   31648 command_runner.go:130] > # 	"operations_latency_seconds",
	I1003 18:08:13.806691   31648 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1003 18:08:13.806694   31648 command_runner.go:130] > # 	"operations_errors_total",
	I1003 18:08:13.806697   31648 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1003 18:08:13.806701   31648 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1003 18:08:13.806705   31648 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1003 18:08:13.806709   31648 command_runner.go:130] > # 	"image_pulls_success_total",
	I1003 18:08:13.806713   31648 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1003 18:08:13.806716   31648 command_runner.go:130] > # 	"containers_oom_count_total",
	I1003 18:08:13.806720   31648 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1003 18:08:13.806724   31648 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1003 18:08:13.806728   31648 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1003 18:08:13.806730   31648 command_runner.go:130] > # ]
	I1003 18:08:13.806736   31648 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1003 18:08:13.806739   31648 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1003 18:08:13.806744   31648 command_runner.go:130] > # The port on which the metrics server will listen.
	I1003 18:08:13.806747   31648 command_runner.go:130] > # metrics_port = 9090
	I1003 18:08:13.806751   31648 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1003 18:08:13.806755   31648 command_runner.go:130] > # metrics_socket = ""
	I1003 18:08:13.806759   31648 command_runner.go:130] > # The certificate for the secure metrics server.
	I1003 18:08:13.806765   31648 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1003 18:08:13.806770   31648 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1003 18:08:13.806774   31648 command_runner.go:130] > # certificate on any modification event.
	I1003 18:08:13.806780   31648 command_runner.go:130] > # metrics_cert = ""
	I1003 18:08:13.806785   31648 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1003 18:08:13.806791   31648 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1003 18:08:13.806795   31648 command_runner.go:130] > # metrics_key = ""
	I1003 18:08:13.806802   31648 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1003 18:08:13.806805   31648 command_runner.go:130] > [crio.tracing]
	I1003 18:08:13.806810   31648 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1003 18:08:13.806816   31648 command_runner.go:130] > # enable_tracing = false
	I1003 18:08:13.806821   31648 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1003 18:08:13.806827   31648 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1003 18:08:13.806834   31648 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1003 18:08:13.806841   31648 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
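As a sketch, enabling tracing with full sampling would combine the three options above, using only values named in the comments (tracing is not enabled in this run):

	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	tracing_sampling_rate_per_million = 1000000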
	I1003 18:08:13.806845   31648 command_runner.go:130] > # CRI-O NRI configuration.
	I1003 18:08:13.806850   31648 command_runner.go:130] > [crio.nri]
	I1003 18:08:13.806854   31648 command_runner.go:130] > # Globally enable or disable NRI.
	I1003 18:08:13.806860   31648 command_runner.go:130] > # enable_nri = true
	I1003 18:08:13.806864   31648 command_runner.go:130] > # NRI socket to listen on.
	I1003 18:08:13.806870   31648 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1003 18:08:13.806874   31648 command_runner.go:130] > # NRI plugin directory to use.
	I1003 18:08:13.806880   31648 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1003 18:08:13.806885   31648 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1003 18:08:13.806891   31648 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1003 18:08:13.806896   31648 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1003 18:08:13.806926   31648 command_runner.go:130] > # nri_disable_connections = false
	I1003 18:08:13.806934   31648 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1003 18:08:13.806938   31648 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1003 18:08:13.806944   31648 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1003 18:08:13.806948   31648 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1003 18:08:13.806955   31648 command_runner.go:130] > # NRI default validator configuration.
	I1003 18:08:13.806961   31648 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1003 18:08:13.806968   31648 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1003 18:08:13.806972   31648 command_runner.go:130] > # can be restricted/rejected:
	I1003 18:08:13.806990   31648 command_runner.go:130] > # - OCI hook injection
	I1003 18:08:13.806998   31648 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1003 18:08:13.807007   31648 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1003 18:08:13.807014   31648 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1003 18:08:13.807024   31648 command_runner.go:130] > # - adjustment of linux namespaces
	I1003 18:08:13.807033   31648 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1003 18:08:13.807041   31648 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1003 18:08:13.807046   31648 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1003 18:08:13.807051   31648 command_runner.go:130] > #
	I1003 18:08:13.807055   31648 command_runner.go:130] > # [crio.nri.default_validator]
	I1003 18:08:13.807060   31648 command_runner.go:130] > # nri_enable_default_validator = false
	I1003 18:08:13.807067   31648 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1003 18:08:13.807072   31648 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1003 18:08:13.807079   31648 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1003 18:08:13.807083   31648 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1003 18:08:13.807088   31648 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1003 18:08:13.807094   31648 command_runner.go:130] > # nri_validator_required_plugins = [
	I1003 18:08:13.807097   31648 command_runner.go:130] > # ]
	I1003 18:08:13.807104   31648 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1003 18:08:13.807109   31648 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1003 18:08:13.807115   31648 command_runner.go:130] > [crio.stats]
	I1003 18:08:13.807121   31648 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1003 18:08:13.807128   31648 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1003 18:08:13.807132   31648 command_runner.go:130] > # stats_collection_period = 0
	I1003 18:08:13.807141   31648 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1003 18:08:13.807147   31648 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1003 18:08:13.807154   31648 command_runner.go:130] > # collection_period = 0
	I1003 18:08:13.807173   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.78773481Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1003 18:08:13.807183   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.787758775Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1003 18:08:13.807194   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.787775454Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1003 18:08:13.807203   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.78779273Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1003 18:08:13.807213   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.7878475Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:08:13.807222   31648 command_runner.go:130] ! time="2025-10-03T18:08:13.788021357Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1003 18:08:13.807234   31648 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1003 18:08:13.807290   31648 cni.go:84] Creating CNI manager for ""
	I1003 18:08:13.807303   31648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:08:13.807321   31648 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:08:13.807344   31648 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-889240 NodeName:functional-889240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:08:13.807460   31648 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-889240"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 18:08:13.807513   31648 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:08:13.814815   31648 command_runner.go:130] > kubeadm
	I1003 18:08:13.814829   31648 command_runner.go:130] > kubectl
	I1003 18:08:13.814834   31648 command_runner.go:130] > kubelet
	I1003 18:08:13.815427   31648 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:08:13.815489   31648 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:08:13.822648   31648 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1003 18:08:13.834615   31648 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:08:13.846006   31648 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1003 18:08:13.857402   31648 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 18:08:13.860916   31648 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1003 18:08:13.860998   31648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:08:13.942536   31648 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:08:13.955386   31648 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240 for IP: 192.168.49.2
	I1003 18:08:13.955406   31648 certs.go:195] generating shared ca certs ...
	I1003 18:08:13.955424   31648 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:08:13.955571   31648 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:08:13.955642   31648 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:08:13.955660   31648 certs.go:257] generating profile certs ...
	I1003 18:08:13.955770   31648 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.key
	I1003 18:08:13.955933   31648 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key.eb3f8f7c
	I1003 18:08:13.956034   31648 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key
	I1003 18:08:13.956049   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:08:13.956072   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:08:13.956090   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:08:13.956107   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:08:13.956123   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:08:13.956140   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:08:13.956160   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:08:13.956185   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:08:13.956244   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:08:13.956286   31648 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:08:13.956298   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:08:13.956331   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:08:13.956364   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:08:13.956397   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:08:13.956451   31648 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:08:13.956487   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:08:13.956507   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:08:13.956528   31648 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:13.957144   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:08:13.973779   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:08:13.990161   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:08:14.006157   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:08:14.022253   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 18:08:14.038198   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:08:14.054095   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:08:14.069959   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:08:14.085810   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:08:14.101812   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:08:14.117716   31648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:08:14.134093   31648 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:08:14.145835   31648 ssh_runner.go:195] Run: openssl version
	I1003 18:08:14.151369   31648 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1003 18:08:14.151660   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:08:14.160011   31648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:08:14.163572   31648 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:08:14.163595   31648 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:08:14.163631   31648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:08:14.196823   31648 command_runner.go:130] > 3ec20f2e
	I1003 18:08:14.197073   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:08:14.204835   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:08:14.212908   31648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:14.216400   31648 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:14.216425   31648 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:14.216454   31648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:08:14.249946   31648 command_runner.go:130] > b5213941
	I1003 18:08:14.250032   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:08:14.257940   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:08:14.266302   31648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:08:14.269939   31648 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:08:14.269964   31648 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:08:14.270013   31648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:08:14.303247   31648 command_runner.go:130] > 51391683
	I1003 18:08:14.303479   31648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
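(The symlink names created above — 3ec20f2e.0, b5213941.0, 51391683.0 — follow OpenSSL's hash-directory convention: each link is the certificate's subject hash plus a collision counter. A minimal sketch reproducing one of the links by hand, using the paths from this log:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem)   # prints 3ec20f2e for this cert
    sudo ln -fs /etc/ssl/certs/122122.pem "/etc/ssl/certs/${hash}.0"              # .0 = first cert with this hash
)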
	I1003 18:08:14.311263   31648 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:08:14.314772   31648 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:08:14.314798   31648 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1003 18:08:14.314807   31648 command_runner.go:130] > Device: 8,1	Inode: 579409      Links: 1
	I1003 18:08:14.314815   31648 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1003 18:08:14.314823   31648 command_runner.go:130] > Access: 2025-10-03 18:04:07.266428775 +0000
	I1003 18:08:14.314828   31648 command_runner.go:130] > Modify: 2025-10-03 18:00:02.305264452 +0000
	I1003 18:08:14.314842   31648 command_runner.go:130] > Change: 2025-10-03 18:00:02.305264452 +0000
	I1003 18:08:14.314851   31648 command_runner.go:130] >  Birth: 2025-10-03 18:00:02.305264452 +0000
	I1003 18:08:14.314920   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 18:08:14.349195   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.349493   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 18:08:14.382820   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.383063   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 18:08:14.416849   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.416933   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 18:08:14.450508   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.450572   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 18:08:14.483927   31648 command_runner.go:130] > Certificate will not expire
	I1003 18:08:14.484012   31648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1003 18:08:14.517658   31648 command_runner.go:130] > Certificate will not expire
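(Each -checkend probe above asks OpenSSL whether the certificate will still be valid 86400 seconds — 24 hours — from now; exit status 0 plus the "Certificate will not expire" line means no rotation is needed. The same check, run by hand against one of the certs from this log:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "still valid for at least 24h"
)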
	I1003 18:08:14.518008   31648 kubeadm.go:400] StartCluster: {Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:08:14.518097   31648 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:08:14.518174   31648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:08:14.544326   31648 cri.go:89] found id: ""
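(found id: "" means the crictl query returned no kube-system containers, so there is nothing running to reuse yet. The same probe can be replayed manually with the exact command from the line above:

    minikube -p functional-889240 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
)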
	I1003 18:08:14.544381   31648 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:08:14.551440   31648 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1003 18:08:14.551457   31648 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1003 18:08:14.551463   31648 command_runner.go:130] > /var/lib/minikube/etcd:
	I1003 18:08:14.551962   31648 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 18:08:14.551995   31648 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 18:08:14.552044   31648 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 18:08:14.559024   31648 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:08:14.559104   31648 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-889240" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:14.559135   31648 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-8669/kubeconfig needs updating (will repair): [kubeconfig missing "functional-889240" cluster setting kubeconfig missing "functional-889240" context setting]
	I1003 18:08:14.559426   31648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
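(Here minikube noticed the functional-889240 cluster and context entries were missing from the kubeconfig and rewrote them under a file lock. A rough manual equivalent — a sketch, not what the test harness runs — would be:

    minikube -p functional-889240 update-context   # rewrite the cluster/context entry for this profile
    kubectl config get-contexts                    # confirm functional-889240 now appears
)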
	I1003 18:08:14.562686   31648 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:14.562840   31648 kapi.go:59] client config for functional-889240: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:08:14.563280   31648 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 18:08:14.563295   31648 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1003 18:08:14.563300   31648 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1003 18:08:14.563305   31648 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 18:08:14.563310   31648 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1003 18:08:14.563344   31648 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1003 18:08:14.563668   31648 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 18:08:14.571379   31648 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1003 18:08:14.571411   31648 kubeadm.go:601] duration metric: took 19.407047ms to restartPrimaryControlPlane
	I1003 18:08:14.571423   31648 kubeadm.go:402] duration metric: took 53.42211ms to StartCluster
	I1003 18:08:14.571440   31648 settings.go:142] acquiring lock: {Name:mk6bc950503a8f341b8aacc07a8bc72d5db3a25c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:08:14.571546   31648 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:14.572080   31648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:08:14.572261   31648 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:08:14.572328   31648 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 18:08:14.572418   31648 addons.go:69] Setting storage-provisioner=true in profile "functional-889240"
	I1003 18:08:14.572440   31648 addons.go:238] Setting addon storage-provisioner=true in "functional-889240"
	I1003 18:08:14.572443   31648 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:08:14.572454   31648 addons.go:69] Setting default-storageclass=true in profile "functional-889240"
	I1003 18:08:14.572472   31648 host.go:66] Checking if "functional-889240" exists ...
	I1003 18:08:14.572481   31648 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-889240"
	I1003 18:08:14.572708   31648 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:08:14.572822   31648 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:08:14.574934   31648 out.go:179] * Verifying Kubernetes components...
	I1003 18:08:14.575948   31648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:08:14.591352   31648 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:08:14.591562   31648 kapi.go:59] client config for functional-889240: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:08:14.591895   31648 addons.go:238] Setting addon default-storageclass=true in "functional-889240"
	I1003 18:08:14.591927   31648 host.go:66] Checking if "functional-889240" exists ...
	I1003 18:08:14.592300   31648 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:08:14.592939   31648 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:08:14.594638   31648 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:14.594655   31648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 18:08:14.594693   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:14.617423   31648 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:14.617446   31648 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 18:08:14.617507   31648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:08:14.620273   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:14.639039   31648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:08:14.672807   31648 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:08:14.684788   31648 node_ready.go:35] waiting up to 6m0s for node "functional-889240" to be "Ready" ...
	I1003 18:08:14.684921   31648 type.go:168] "Request Body" body=""
	I1003 18:08:14.685003   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:14.685252   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
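(This GET is the readiness poll: minikube re-fetches the node object roughly every 500ms and inspects its Ready condition. The empty status="" responses that follow mean the TCP connection itself failed, not that the node reported NotReady. A hand-rolled version of the same check, assuming the kubeconfig context is named after the profile as minikube does by default:

    kubectl --context functional-889240 get node functional-889240 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints True once the node is Ready
)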
	I1003 18:08:14.730950   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:14.745066   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:14.786328   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:14.786378   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:14.786409   31648 retry.go:31] will retry after 270.951246ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:14.798186   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:14.798232   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:14.798258   31648 retry.go:31] will retry after 360.152106ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
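(From here the log settles into a cycle: both addon manifests are re-applied with growing, jittered delays — roughly 270ms at first, several seconds later — until the apiserver answers. Purely as an illustration of that backoff loop, not minikube's actual retry.go code:

    # illustrative sketch; delays approximate the jittered backoff seen in this log
    for delay in 0.3 0.4 0.5 0.8 1.2 1.6 3.6; do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
        -f /etc/kubernetes/addons/storageclass.yaml && break
      sleep "$delay"
    done
)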
	I1003 18:08:15.057602   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:15.106841   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:15.109109   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.109138   31648 retry.go:31] will retry after 397.537911ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.159331   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:15.185817   31648 type.go:168] "Request Body" body=""
	I1003 18:08:15.185883   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:15.186219   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:15.210176   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:15.210221   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.210238   31648 retry.go:31] will retry after 493.012433ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.507675   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:15.555577   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:15.557666   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.557696   31648 retry.go:31] will retry after 440.122822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.685949   31648 type.go:168] "Request Body" body=""
	I1003 18:08:15.686038   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:15.686370   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:15.703496   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:15.753710   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:15.753758   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.753776   31648 retry.go:31] will retry after 795.152031ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:15.998073   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:16.047743   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:16.047782   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.047802   31648 retry.go:31] will retry after 705.62402ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.185279   31648 type.go:168] "Request Body" body=""
	I1003 18:08:16.185360   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:16.185691   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:16.549101   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:16.597196   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:16.599345   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.599377   31648 retry.go:31] will retry after 940.255489ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.685633   31648 type.go:168] "Request Body" body=""
	I1003 18:08:16.685701   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:16.685999   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:16.686058   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
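(Every failure in this stretch is the same symptom: connection refused on port 8441, i.e. the kube-apiserver static pod has not come back up after the restart. Two direct probes that would confirm this from the host — assuming Kubernetes defaults for the anonymous /healthz grant and static-pod container naming:

    curl -ks https://192.168.49.2:8441/healthz; echo                              # "ok" once the apiserver is listening
    minikube -p functional-889240 ssh -- sudo crictl ps -a | grep kube-apiserver  # static pod container state
)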
	I1003 18:08:16.754204   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:16.801452   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:16.803457   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:16.803489   31648 retry.go:31] will retry after 1.24021873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:17.184970   31648 type.go:168] "Request Body" body=""
	I1003 18:08:17.185063   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:17.185424   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:17.539832   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:17.590758   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:17.590802   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:17.590823   31648 retry.go:31] will retry after 1.395425458s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:17.685012   31648 type.go:168] "Request Body" body=""
	I1003 18:08:17.685095   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:17.685454   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:18.043958   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:18.094735   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:18.094776   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:18.094793   31648 retry.go:31] will retry after 1.596032935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:18.185003   31648 type.go:168] "Request Body" body=""
	I1003 18:08:18.185100   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:18.185407   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:18.685017   31648 type.go:168] "Request Body" body=""
	I1003 18:08:18.685100   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:18.685393   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:18.986876   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:19.035593   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:19.038332   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:19.038363   31648 retry.go:31] will retry after 1.200373965s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:19.185671   31648 type.go:168] "Request Body" body=""
	I1003 18:08:19.185764   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:19.186105   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:19.186155   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:19.686009   31648 type.go:168] "Request Body" body=""
	I1003 18:08:19.686091   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:19.686423   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:19.691557   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:19.741190   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:19.743532   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:19.743567   31648 retry.go:31] will retry after 3.569328126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:20.185118   31648 type.go:168] "Request Body" body=""
	I1003 18:08:20.185184   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:20.185523   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:20.239734   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:20.289529   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:20.291706   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:20.291741   31648 retry.go:31] will retry after 1.81500567s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:20.685251   31648 type.go:168] "Request Body" body=""
	I1003 18:08:20.685325   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:20.685635   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:21.185510   31648 type.go:168] "Request Body" body=""
	I1003 18:08:21.185583   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:21.185888   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:21.685727   31648 type.go:168] "Request Body" body=""
	I1003 18:08:21.685836   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:21.686208   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:08:21.686275   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:08:22.107768   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:22.158032   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:22.158081   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:22.158100   31648 retry.go:31] will retry after 3.676335527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
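
Note the retry cadence: retry.go backs off for 1.8s, then 3.7s here, and later 9.1s, 13.8s, 10.2s and 25.3s for the same manifest, i.e. roughly exponential growth with jitter. A generic sketch of that pattern (the apply callback is a stand-in, not minikube's retry API):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff retries op with exponentially growing, jittered
    // delays, the pattern behind the "will retry after ..." log lines.
    func retryWithBackoff(op func() error, attempts int, base time.Duration) error {
        var err error
        delay := base
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            // Jitter: sleep between 0.5x and 1.5x the nominal delay so
            // concurrent appliers do not retry in lock-step.
            sleep := time.Duration(float64(delay) * (0.5 + rand.Float64()))
            fmt.Printf("will retry after %s: %v\n", sleep, err)
            time.Sleep(sleep)
            delay *= 2
        }
        return err
    }

    func main() {
        apply := func() error { return errors.New("connection refused") } // stand-in for the kubectl apply
        _ = retryWithBackoff(apply, 4, 2*time.Second)
    }
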
	[... 3 identical polls of /api/v1/nodes/functional-889240 at 18:08:22.185, 18:08:22.685 and 18:08:23.185; all connections refused ...]
	I1003 18:08:23.313354   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:23.364461   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:23.364519   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:23.364543   31648 retry.go:31] will retry after 3.926696561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 5 identical polls from 18:08:23.685 to 18:08:25.686, all refused; node_ready.go:55 warned at 18:08:23.686 that the "Ready" check got connection refused and will retry ...]
	I1003 18:08:25.835120   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:25.883846   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:25.886330   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:25.886360   31648 retry.go:31] will retry after 9.086319041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 3 identical polls from 18:08:26.185 to 18:08:27.186, all refused; node_ready warning at 18:08:26.186 (will retry) ...]
	I1003 18:08:27.291951   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:27.344646   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:27.344705   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:27.344728   31648 retry.go:31] will retry after 9.233335187s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 15 identical polls at the same 0.5s cadence from 18:08:27.685 to 18:08:34.686, all refused; node_ready warnings at 18:08:28.685, 18:08:31.185 and 18:08:33.685 (will retry) ...]
	I1003 18:08:34.973491   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:35.025995   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:35.026042   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:35.026060   31648 retry.go:31] will retry after 13.835197481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 3 identical polls from 18:08:35.185 to 18:08:36.186, all refused; node_ready warning at 18:08:35.685 (will retry) ...]
	I1003 18:08:36.578491   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:36.629045   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:36.629094   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:36.629123   31648 retry.go:31] will retry after 7.439097167s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 15 identical polls from 18:08:36.685 to 18:08:43.686, all refused; node_ready warnings at 18:08:37.686, 18:08:40.185 and 18:08:42.186 (will retry) ...]
	I1003 18:08:44.068807   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:44.118932   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:44.118993   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:44.119018   31648 retry.go:31] will retry after 11.649333138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 10 identical polls from 18:08:44.185 to 18:08:48.685, all refused; node_ready warnings at 18:08:44.685 and 18:08:46.686 (will retry) ...]
	I1003 18:08:48.862137   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:48.911551   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:48.911612   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:48.911635   31648 retry.go:31] will retry after 10.230842759s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 14 identical polls from 18:08:49.184 to 18:08:55.685, all refused; node_ready warnings at 18:08:49.185, 18:08:51.186 and 18:08:53.685 (will retry) ...]
	I1003 18:08:55.768789   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:08:55.820187   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:55.820247   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:55.820271   31648 retry.go:31] will retry after 17.817355848s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 6 identical polls from 18:08:56.185 to 18:08:58.685, all refused; node_ready warnings at 18:08:56.186 and 18:08:58.685 (will retry) ...]
	I1003 18:08:59.143069   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:08:59.185821   31648 type.go:168] "Request Body" body=""
	I1003 18:08:59.185917   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:08:59.186232   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:08:59.193474   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:08:59.193510   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:08:59.193527   31648 retry.go:31] will retry after 25.255183485s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
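
For what it's worth, the workaround the error text suggests (--validate=false) would not rescue these applies: it skips the client-side OpenAPI download, but the subsequent request goes to the same dead apiserver and is refused just as fast. A sketch of the command the ssh_runner executes, with that flag added (paths copied from the log; illustrative only):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same invocation as the log, plus --validate=false; a live
        // apiserver on port 8441 is still required for it to succeed.
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.34.1/kubectl", "apply", "--force",
            "--validate=false", "-f", "/etc/kubernetes/addons/storageclass.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }
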
	[... 11 identical polls from 18:08:59.685 to 18:09:04.685, all refused; node_ready warnings at 18:09:01.186 and 18:09:03.685 (will retry); the log below breaks off mid-poll ...]
	I1003 18:09:05.185368   31648 type.go:168] "Request Body" body=""
	I1003 18:09:05.185430   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:05.185752   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:05.685306   31648 type.go:168] "Request Body" body=""
	I1003 18:09:05.685399   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:05.685722   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:05.685773   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:06.185506   31648 type.go:168] "Request Body" body=""
	I1003 18:09:06.185596   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:06.185889   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:06.685509   31648 type.go:168] "Request Body" body=""
	I1003 18:09:06.685600   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:06.685920   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:07.185528   31648 type.go:168] "Request Body" body=""
	I1003 18:09:07.185591   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:07.185930   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:07.685592   31648 type.go:168] "Request Body" body=""
	I1003 18:09:07.685666   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:07.686000   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:07.686050   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:08.185578   31648 type.go:168] "Request Body" body=""
	I1003 18:09:08.185676   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:08.185969   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:08.685655   31648 type.go:168] "Request Body" body=""
	I1003 18:09:08.685728   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:08.686124   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:09.185744   31648 type.go:168] "Request Body" body=""
	I1003 18:09:09.185811   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:09.186109   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:09.685870   31648 type.go:168] "Request Body" body=""
	I1003 18:09:09.685938   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:09.686249   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:09.686300   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:10.185899   31648 type.go:168] "Request Body" body=""
	I1003 18:09:10.185995   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:10.186296   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:10.684943   31648 type.go:168] "Request Body" body=""
	I1003 18:09:10.685033   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:10.685323   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:11.185004   31648 type.go:168] "Request Body" body=""
	I1003 18:09:11.185066   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:11.185370   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:11.684959   31648 type.go:168] "Request Body" body=""
	I1003 18:09:11.685050   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:11.685368   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:12.184955   31648 type.go:168] "Request Body" body=""
	I1003 18:09:12.185063   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:12.185367   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:12.185420   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:12.684941   31648 type.go:168] "Request Body" body=""
	I1003 18:09:12.685054   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:12.685356   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:13.185955   31648 type.go:168] "Request Body" body=""
	I1003 18:09:13.186031   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:13.186349   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:13.637912   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:09:13.686249   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:09:13.688536   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:09:13.688567   31648 retry.go:31] will retry after 16.395640375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
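
Note the shared failure mode in both apply errors: kubectl apply first downloads the apiserver's OpenAPI schema to validate the manifest, so with nothing listening on port 8441 it fails before the manifest content matters; --validate=false would skip the schema fetch, but the apply itself would still need a reachable apiserver. A quick probe to confirm the port is closed, sketched with Go's net package (the address comes from the log; the probe is an illustration, not part of the test):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Same endpoint the failing OpenAPI fetch targets.
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver port closed:", err) // e.g. "connect: connection refused"
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port open")
    }
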
	[… 21 further polls of the same URL (18:09:14.185–18:09:24.185) omitted, all connection refused; node_ready.go:55 warnings at 18:09:14, 18:09:16, 18:09:18, 18:09:20 and 18:09:23 …]
	I1003 18:09:24.449821   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:09:24.497529   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:09:24.499857   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:09:24.499886   31648 retry.go:31] will retry after 48.383287224s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[… 11 further polls (18:09:24.685–18:09:29.685) omitted, all connection refused; node_ready.go:55 warnings at 18:09:25 and 18:09:27 …]
	I1003 18:09:30.085101   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:09:30.133826   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:09:30.136048   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:09:30.136077   31648 retry.go:31] will retry after 44.319890963s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[… 52 further polls (18:09:30.185–18:09:55.685) omitted, all connection refused, with node_ready.go:55 warnings roughly every 2.5s; the captured log ends mid-poll at 18:09:55.685 …]
	 >
	I1003 18:09:55.685564   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:56.185405   31648 type.go:168] "Request Body" body=""
	I1003 18:09:56.185491   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:56.185823   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:56.185874   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:56.685614   31648 type.go:168] "Request Body" body=""
	I1003 18:09:56.685702   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:56.686026   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:57.185904   31648 type.go:168] "Request Body" body=""
	I1003 18:09:57.186000   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:57.186336   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:57.685087   31648 type.go:168] "Request Body" body=""
	I1003 18:09:57.685160   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:57.685447   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:58.185160   31648 type.go:168] "Request Body" body=""
	I1003 18:09:58.185246   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:58.185558   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:58.685303   31648 type.go:168] "Request Body" body=""
	I1003 18:09:58.685365   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:58.685671   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:09:58.685755   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:09:59.185446   31648 type.go:168] "Request Body" body=""
	I1003 18:09:59.185545   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:59.185914   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:09:59.685737   31648 type.go:168] "Request Body" body=""
	I1003 18:09:59.685801   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:09:59.686146   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:00.185972   31648 type.go:168] "Request Body" body=""
	I1003 18:10:00.186075   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:00.186364   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:00.685077   31648 type.go:168] "Request Body" body=""
	I1003 18:10:00.685166   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:00.685464   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:01.185382   31648 type.go:168] "Request Body" body=""
	I1003 18:10:01.185446   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:01.185778   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:01.185830   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:01.685606   31648 type.go:168] "Request Body" body=""
	I1003 18:10:01.685677   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:01.686032   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:02.185907   31648 type.go:168] "Request Body" body=""
	I1003 18:10:02.186020   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:02.186378   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:02.685091   31648 type.go:168] "Request Body" body=""
	I1003 18:10:02.685152   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:02.685445   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:03.185142   31648 type.go:168] "Request Body" body=""
	I1003 18:10:03.185225   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:03.185561   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:03.685236   31648 type.go:168] "Request Body" body=""
	I1003 18:10:03.685339   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:03.685634   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:03.685696   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:04.185365   31648 type.go:168] "Request Body" body=""
	I1003 18:10:04.185433   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:04.185727   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:04.685562   31648 type.go:168] "Request Body" body=""
	I1003 18:10:04.685630   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:04.686027   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:05.185808   31648 type.go:168] "Request Body" body=""
	I1003 18:10:05.185875   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:05.186210   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:05.686012   31648 type.go:168] "Request Body" body=""
	I1003 18:10:05.686094   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:05.686420   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:05.686513   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:06.185220   31648 type.go:168] "Request Body" body=""
	I1003 18:10:06.185317   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:06.185670   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:06.685370   31648 type.go:168] "Request Body" body=""
	I1003 18:10:06.685434   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:06.685727   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:07.185434   31648 type.go:168] "Request Body" body=""
	I1003 18:10:07.185512   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:07.185878   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:07.685679   31648 type.go:168] "Request Body" body=""
	I1003 18:10:07.685748   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:07.686309   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:08.185067   31648 type.go:168] "Request Body" body=""
	I1003 18:10:08.185137   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:08.185459   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:08.185516   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:08.685191   31648 type.go:168] "Request Body" body=""
	I1003 18:10:08.685261   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:08.685582   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:09.185329   31648 type.go:168] "Request Body" body=""
	I1003 18:10:09.185397   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:09.185705   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:09.685441   31648 type.go:168] "Request Body" body=""
	I1003 18:10:09.685504   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:09.685840   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:10.185620   31648 type.go:168] "Request Body" body=""
	I1003 18:10:10.185689   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:10.186037   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:10.186087   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:10.685838   31648 type.go:168] "Request Body" body=""
	I1003 18:10:10.685914   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:10.686280   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:11.184954   31648 type.go:168] "Request Body" body=""
	I1003 18:10:11.185044   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:11.185353   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:11.685099   31648 type.go:168] "Request Body" body=""
	I1003 18:10:11.685168   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:11.685473   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:12.185192   31648 type.go:168] "Request Body" body=""
	I1003 18:10:12.185259   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:12.185564   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:12.685315   31648 type.go:168] "Request Body" body=""
	I1003 18:10:12.685386   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:12.685819   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:12.685875   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:12.884184   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:10:12.932382   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:10:12.934859   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:10:12.935018   31648 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
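
[Editor's note] The block above shows how minikube enables the 'default-storageclass' addon: it runs the bundled kubectl over SSH against a manifest staged under /etc/kubernetes/addons/. Here the apply fails before the resource ever reaches the server, because kubectl's client-side validation first fetches the /openapi/v2 schema, and that request is refused along with everything else while the apiserver is down; the same sequence repeats for storage-provisioner just below. A rough Go sketch of the apply-with-retry pattern follows; the command line mirrors the log, but the retry count and backoff are assumptions, not minikube's actual addons.go policy.

	// applyAddon shells out to the bundled kubectl, as minikube's ssh_runner
	// does, and retries on failure rather than giving up immediately
	// ("apply failed, will retry" in the log above).
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func applyAddon(manifest string) error {
		args := []string{
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.34.1/kubectl",
			"apply", "--force", "-f", manifest,
		}
		var lastErr error
		for attempt := 1; attempt <= 3; attempt++ { // retry count is an assumption
			out, err := exec.Command("sudo", args...).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply failed, will retry: %v\nstderr/stdout:\n%s", err, out)
			fmt.Println(lastErr)
			time.Sleep(2 * time.Second) // backoff is an assumption
		}
		return lastErr
	}

Tolerating these failures and retrying, instead of aborting the start, is what lets minikube recover when the apiserver comes back mid-start; in this run it never does, which is why the addon step eventually gives up and reports an empty enabled list below.
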
	I1003 18:10:13.185242   31648 type.go:168] "Request Body" body=""
	I1003 18:10:13.185310   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:13.185617   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:13.685328   31648 type.go:168] "Request Body" body=""
	I1003 18:10:13.685430   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:13.685917   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:14.185730   31648 type.go:168] "Request Body" body=""
	I1003 18:10:14.185796   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:14.186122   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:14.456560   31648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:10:14.507486   31648 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:10:14.509939   31648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:10:14.510064   31648 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1003 18:10:14.512677   31648 out.go:179] * Enabled addons: 
	I1003 18:10:14.514281   31648 addons.go:514] duration metric: took 1m59.941954445s for enable addons: enabled=[]
	I1003 18:10:14.685449   31648 type.go:168] "Request Body" body=""
	I1003 18:10:14.685516   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:14.685857   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:14.685919   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:15.185675   31648 type.go:168] "Request Body" body=""
	I1003 18:10:15.185738   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:15.186060   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:15.685871   31648 type.go:168] "Request Body" body=""
	I1003 18:10:15.685938   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:15.686263   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:16.184928   31648 type.go:168] "Request Body" body=""
	I1003 18:10:16.185033   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:16.185365   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:16.685082   31648 type.go:168] "Request Body" body=""
	I1003 18:10:16.685144   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:16.685447   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:17.185125   31648 type.go:168] "Request Body" body=""
	I1003 18:10:17.185202   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:17.185514   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:17.185563   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:17.685251   31648 type.go:168] "Request Body" body=""
	I1003 18:10:17.685320   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:17.685625   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:18.185367   31648 type.go:168] "Request Body" body=""
	I1003 18:10:18.185448   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:18.185805   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:18.685631   31648 type.go:168] "Request Body" body=""
	I1003 18:10:18.685706   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:18.686092   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:19.185904   31648 type.go:168] "Request Body" body=""
	I1003 18:10:19.185995   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:19.186318   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:19.186371   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:19.685092   31648 type.go:168] "Request Body" body=""
	I1003 18:10:19.685164   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:19.685487   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:20.185213   31648 type.go:168] "Request Body" body=""
	I1003 18:10:20.185296   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:20.185633   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:20.685398   31648 type.go:168] "Request Body" body=""
	I1003 18:10:20.685475   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:20.685780   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:21.185636   31648 type.go:168] "Request Body" body=""
	I1003 18:10:21.185711   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:21.186047   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:21.685810   31648 type.go:168] "Request Body" body=""
	I1003 18:10:21.685874   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:21.686211   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:21.686273   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:22.184932   31648 type.go:168] "Request Body" body=""
	I1003 18:10:22.185016   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:22.185357   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:22.685073   31648 type.go:168] "Request Body" body=""
	I1003 18:10:22.685138   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:22.685450   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:23.185168   31648 type.go:168] "Request Body" body=""
	I1003 18:10:23.185239   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:23.185562   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:23.685280   31648 type.go:168] "Request Body" body=""
	I1003 18:10:23.685364   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:23.685684   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:24.185432   31648 type.go:168] "Request Body" body=""
	I1003 18:10:24.185494   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:24.185826   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:24.185890   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:24.685663   31648 type.go:168] "Request Body" body=""
	I1003 18:10:24.685735   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:24.686142   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:25.185900   31648 type.go:168] "Request Body" body=""
	I1003 18:10:25.185964   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:25.186274   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:25.685013   31648 type.go:168] "Request Body" body=""
	I1003 18:10:25.685093   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:25.685422   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:26.185213   31648 type.go:168] "Request Body" body=""
	I1003 18:10:26.185323   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:26.185654   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:26.685413   31648 type.go:168] "Request Body" body=""
	I1003 18:10:26.685482   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:26.685843   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:26.685908   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:27.185654   31648 type.go:168] "Request Body" body=""
	I1003 18:10:27.185733   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:27.186080   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:27.685901   31648 type.go:168] "Request Body" body=""
	I1003 18:10:27.685968   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:27.686301   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:28.185042   31648 type.go:168] "Request Body" body=""
	I1003 18:10:28.185109   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:28.185417   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:28.685129   31648 type.go:168] "Request Body" body=""
	I1003 18:10:28.685212   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:28.685544   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:29.185277   31648 type.go:168] "Request Body" body=""
	I1003 18:10:29.185350   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:29.185667   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:29.185717   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:29.685390   31648 type.go:168] "Request Body" body=""
	I1003 18:10:29.685463   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:29.685809   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:30.185653   31648 type.go:168] "Request Body" body=""
	I1003 18:10:30.185740   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:30.186077   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:30.685885   31648 type.go:168] "Request Body" body=""
	I1003 18:10:30.685950   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:30.686302   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:31.184960   31648 type.go:168] "Request Body" body=""
	I1003 18:10:31.185039   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:31.185351   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:31.685088   31648 type.go:168] "Request Body" body=""
	I1003 18:10:31.685183   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:31.685491   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:31.685553   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:32.185245   31648 type.go:168] "Request Body" body=""
	I1003 18:10:32.185311   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:32.185616   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:32.685334   31648 type.go:168] "Request Body" body=""
	I1003 18:10:32.685427   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:32.685753   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:33.185521   31648 type.go:168] "Request Body" body=""
	I1003 18:10:33.185585   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:33.185951   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:33.685776   31648 type.go:168] "Request Body" body=""
	I1003 18:10:33.685843   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:33.686164   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:33.686226   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:34.186008   31648 type.go:168] "Request Body" body=""
	I1003 18:10:34.186076   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:34.186390   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:34.685080   31648 type.go:168] "Request Body" body=""
	I1003 18:10:34.685151   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:34.685468   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:35.185199   31648 type.go:168] "Request Body" body=""
	I1003 18:10:35.185274   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:35.185624   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:35.685334   31648 type.go:168] "Request Body" body=""
	I1003 18:10:35.685407   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:35.685728   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:36.185543   31648 type.go:168] "Request Body" body=""
	I1003 18:10:36.185617   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:36.185950   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:36.186025   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:36.685767   31648 type.go:168] "Request Body" body=""
	I1003 18:10:36.685830   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:36.686160   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:37.185965   31648 type.go:168] "Request Body" body=""
	I1003 18:10:37.186062   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:37.186419   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:37.685168   31648 type.go:168] "Request Body" body=""
	I1003 18:10:37.685233   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:37.685563   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:38.185271   31648 type.go:168] "Request Body" body=""
	I1003 18:10:38.185345   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:38.185657   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:38.685369   31648 type.go:168] "Request Body" body=""
	I1003 18:10:38.685433   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:38.685746   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:38.685800   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:39.185560   31648 type.go:168] "Request Body" body=""
	I1003 18:10:39.185640   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:39.185997   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:39.685784   31648 type.go:168] "Request Body" body=""
	I1003 18:10:39.685851   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:39.686184   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:40.185949   31648 type.go:168] "Request Body" body=""
	I1003 18:10:40.186046   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:40.186401   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:40.685071   31648 type.go:168] "Request Body" body=""
	I1003 18:10:40.685152   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:40.685459   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:41.185267   31648 type.go:168] "Request Body" body=""
	I1003 18:10:41.185334   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:41.185637   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:41.185700   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:41.685380   31648 type.go:168] "Request Body" body=""
	I1003 18:10:41.685445   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:41.685830   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:42.185632   31648 type.go:168] "Request Body" body=""
	I1003 18:10:42.185724   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:42.186063   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:42.685859   31648 type.go:168] "Request Body" body=""
	I1003 18:10:42.685933   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:42.686273   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:43.185018   31648 type.go:168] "Request Body" body=""
	I1003 18:10:43.185089   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:43.185411   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:43.685086   31648 type.go:168] "Request Body" body=""
	I1003 18:10:43.685152   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:43.685478   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:43.685542   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:44.185259   31648 type.go:168] "Request Body" body=""
	I1003 18:10:44.185327   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:44.185679   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:44.685473   31648 type.go:168] "Request Body" body=""
	I1003 18:10:44.685537   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:44.685872   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:45.185684   31648 type.go:168] "Request Body" body=""
	I1003 18:10:45.185759   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:45.186086   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:45.685880   31648 type.go:168] "Request Body" body=""
	I1003 18:10:45.685945   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:45.686284   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:45.686349   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:46.184919   31648 type.go:168] "Request Body" body=""
	I1003 18:10:46.185021   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:46.185345   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:46.685069   31648 type.go:168] "Request Body" body=""
	I1003 18:10:46.685147   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:46.685496   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:47.185204   31648 type.go:168] "Request Body" body=""
	I1003 18:10:47.185304   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:47.185613   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:47.685395   31648 type.go:168] "Request Body" body=""
	I1003 18:10:47.685473   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:47.685844   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:48.185624   31648 type.go:168] "Request Body" body=""
	I1003 18:10:48.185707   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:48.186048   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:48.186105   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:48.685861   31648 type.go:168] "Request Body" body=""
	I1003 18:10:48.685948   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:48.686324   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:49.185066   31648 type.go:168] "Request Body" body=""
	I1003 18:10:49.185176   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:49.185503   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:49.685237   31648 type.go:168] "Request Body" body=""
	I1003 18:10:49.685317   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:49.685703   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:50.185462   31648 type.go:168] "Request Body" body=""
	I1003 18:10:50.185540   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:50.185875   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:50.685684   31648 type.go:168] "Request Body" body=""
	I1003 18:10:50.685764   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:50.686154   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:50.686209   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:51.185959   31648 type.go:168] "Request Body" body=""
	I1003 18:10:51.186061   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:51.186411   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:51.685154   31648 type.go:168] "Request Body" body=""
	I1003 18:10:51.685222   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:51.685506   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:52.185254   31648 type.go:168] "Request Body" body=""
	I1003 18:10:52.185335   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:52.185690   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:52.685398   31648 type.go:168] "Request Body" body=""
	I1003 18:10:52.685466   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:52.685770   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:53.185621   31648 type.go:168] "Request Body" body=""
	I1003 18:10:53.185692   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:53.186039   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:53.186109   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:53.685850   31648 type.go:168] "Request Body" body=""
	I1003 18:10:53.685914   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:53.686255   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:54.185017   31648 type.go:168] "Request Body" body=""
	I1003 18:10:54.185080   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:54.185397   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:54.685078   31648 type.go:168] "Request Body" body=""
	I1003 18:10:54.685145   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:54.685459   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:55.185159   31648 type.go:168] "Request Body" body=""
	I1003 18:10:55.185227   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:55.185528   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:55.685211   31648 type.go:168] "Request Body" body=""
	I1003 18:10:55.685279   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:55.685586   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:55.685652   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:56.185352   31648 type.go:168] "Request Body" body=""
	I1003 18:10:56.185430   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:56.185759   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:56.685531   31648 type.go:168] "Request Body" body=""
	I1003 18:10:56.685600   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:56.685922   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:57.185723   31648 type.go:168] "Request Body" body=""
	I1003 18:10:57.185811   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:57.186156   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:57.685922   31648 type.go:168] "Request Body" body=""
	I1003 18:10:57.686010   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:57.686316   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:10:57.686367   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:10:58.185097   31648 type.go:168] "Request Body" body=""
	I1003 18:10:58.185187   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:58.185535   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:58.685089   31648 type.go:168] "Request Body" body=""
	I1003 18:10:58.685158   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:58.685458   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:59.185180   31648 type.go:168] "Request Body" body=""
	I1003 18:10:59.185260   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:59.185605   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:10:59.685329   31648 type.go:168] "Request Body" body=""
	I1003 18:10:59.685409   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:10:59.685768   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:00.185577   31648 type.go:168] "Request Body" body=""
	I1003 18:11:00.185644   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:00.185968   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:00.186053   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:00.685767   31648 type.go:168] "Request Body" body=""
	I1003 18:11:00.685853   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:00.686208   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:01.185912   31648 type.go:168] "Request Body" body=""
	I1003 18:11:01.186001   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:01.186311   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:01.685060   31648 type.go:168] "Request Body" body=""
	I1003 18:11:01.685173   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:01.685511   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:02.185272   31648 type.go:168] "Request Body" body=""
	I1003 18:11:02.185343   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:02.185674   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:02.685366   31648 type.go:168] "Request Body" body=""
	I1003 18:11:02.685447   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:02.685807   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:02.685860   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:03.185586   31648 type.go:168] "Request Body" body=""
	I1003 18:11:03.185653   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:03.186010   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:03.685810   31648 type.go:168] "Request Body" body=""
	I1003 18:11:03.685892   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:03.686241   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:04.184939   31648 type.go:168] "Request Body" body=""
	I1003 18:11:04.185023   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:04.185312   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:04.685060   31648 type.go:168] "Request Body" body=""
	I1003 18:11:04.685143   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:04.685467   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:05.185189   31648 type.go:168] "Request Body" body=""
	I1003 18:11:05.185258   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:05.185567   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:05.185625   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:05.685299   31648 type.go:168] "Request Body" body=""
	I1003 18:11:05.685378   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:05.685703   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:06.185511   31648 type.go:168] "Request Body" body=""
	I1003 18:11:06.185600   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:06.185915   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:06.685750   31648 type.go:168] "Request Body" body=""
	I1003 18:11:06.685834   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:06.686186   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:07.185989   31648 type.go:168] "Request Body" body=""
	I1003 18:11:07.186058   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:07.186369   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:07.186436   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:07.685126   31648 type.go:168] "Request Body" body=""
	I1003 18:11:07.685203   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:07.685514   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:08.185223   31648 type.go:168] "Request Body" body=""
	I1003 18:11:08.185315   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:08.185627   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:08.685356   31648 type.go:168] "Request Body" body=""
	I1003 18:11:08.685469   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:08.685819   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:09.185588   31648 type.go:168] "Request Body" body=""
	I1003 18:11:09.185655   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:09.186048   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:09.685858   31648 type.go:168] "Request Body" body=""
	I1003 18:11:09.685945   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:09.686291   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:09.686344   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:10.185028   31648 type.go:168] "Request Body" body=""
	I1003 18:11:10.185112   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:10.185419   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:10.685125   31648 type.go:168] "Request Body" body=""
	I1003 18:11:10.685235   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:10.685580   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:11.185333   31648 type.go:168] "Request Body" body=""
	I1003 18:11:11.185400   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:11.185721   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:11.685427   31648 type.go:168] "Request Body" body=""
	I1003 18:11:11.685540   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:11.685876   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:12.185659   31648 type.go:168] "Request Body" body=""
	I1003 18:11:12.185756   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:12.186078   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:12.186142   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:12.685887   31648 type.go:168] "Request Body" body=""
	I1003 18:11:12.685959   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:12.686282   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:13.185003   31648 type.go:168] "Request Body" body=""
	I1003 18:11:13.185081   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:13.185409   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:13.685094   31648 type.go:168] "Request Body" body=""
	I1003 18:11:13.685164   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:13.685478   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:14.185184   31648 type.go:168] "Request Body" body=""
	I1003 18:11:14.185260   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:14.185598   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:14.685408   31648 type.go:168] "Request Body" body=""
	I1003 18:11:14.685477   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:14.685794   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:14.685865   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:15.185614   31648 type.go:168] "Request Body" body=""
	I1003 18:11:15.185690   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:15.186097   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:15.685915   31648 type.go:168] "Request Body" body=""
	I1003 18:11:15.686020   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:15.686331   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:16.185164   31648 type.go:168] "Request Body" body=""
	I1003 18:11:16.185233   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:16.185540   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:16.685230   31648 type.go:168] "Request Body" body=""
	I1003 18:11:16.685290   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:16.685601   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:17.185312   31648 type.go:168] "Request Body" body=""
	I1003 18:11:17.185380   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:17.185697   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:17.185779   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:17.685436   31648 type.go:168] "Request Body" body=""
	I1003 18:11:17.685502   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:17.685845   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:18.185654   31648 type.go:168] "Request Body" body=""
	I1003 18:11:18.185717   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:18.186072   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:18.685861   31648 type.go:168] "Request Body" body=""
	I1003 18:11:18.685924   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:18.686240   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:19.185000   31648 type.go:168] "Request Body" body=""
	I1003 18:11:19.185076   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:19.185392   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:19.685130   31648 type.go:168] "Request Body" body=""
	I1003 18:11:19.685199   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:19.685540   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:19.685603   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:20.185304   31648 type.go:168] "Request Body" body=""
	I1003 18:11:20.185368   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:20.185692   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:20.685437   31648 type.go:168] "Request Body" body=""
	I1003 18:11:20.685512   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:20.685889   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:21.185654   31648 type.go:168] "Request Body" body=""
	I1003 18:11:21.185736   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:21.186088   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:21.685864   31648 type.go:168] "Request Body" body=""
	I1003 18:11:21.685950   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:21.686257   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:21.686310   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:22.185029   31648 type.go:168] "Request Body" body=""
	I1003 18:11:22.185128   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:22.185448   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:22.685177   31648 type.go:168] "Request Body" body=""
	I1003 18:11:22.685257   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:22.685561   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:23.185277   31648 type.go:168] "Request Body" body=""
	I1003 18:11:23.185353   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:23.185666   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:23.685362   31648 type.go:168] "Request Body" body=""
	I1003 18:11:23.685435   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:23.685751   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:24.185475   31648 type.go:168] "Request Body" body=""
	I1003 18:11:24.185552   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:24.185910   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:24.185963   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:24.685584   31648 type.go:168] "Request Body" body=""
	I1003 18:11:24.685659   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:24.685971   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:25.185758   31648 type.go:168] "Request Body" body=""
	I1003 18:11:25.185842   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:25.186204   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:25.685956   31648 type.go:168] "Request Body" body=""
	I1003 18:11:25.686040   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:25.686348   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:26.185071   31648 type.go:168] "Request Body" body=""
	I1003 18:11:26.185144   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:26.185483   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:26.685189   31648 type.go:168] "Request Body" body=""
	I1003 18:11:26.685255   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:26.685555   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:26.685624   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:27.185293   31648 type.go:168] "Request Body" body=""
	I1003 18:11:27.185364   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:27.185670   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:27.685353   31648 type.go:168] "Request Body" body=""
	I1003 18:11:27.685417   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:27.685713   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:28.185462   31648 type.go:168] "Request Body" body=""
	I1003 18:11:28.185529   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:28.185838   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:28.685636   31648 type.go:168] "Request Body" body=""
	I1003 18:11:28.685711   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:28.686033   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:28.686095   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:29.185891   31648 type.go:168] "Request Body" body=""
	I1003 18:11:29.185959   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:29.186289   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:29.684999   31648 type.go:168] "Request Body" body=""
	I1003 18:11:29.685063   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:29.685358   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:30.185079   31648 type.go:168] "Request Body" body=""
	I1003 18:11:30.185147   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:30.185448   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:30.685153   31648 type.go:168] "Request Body" body=""
	I1003 18:11:30.685224   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:30.685542   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:31.185387   31648 type.go:168] "Request Body" body=""
	I1003 18:11:31.185470   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:31.185801   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:31.185869   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:31.685601   31648 type.go:168] "Request Body" body=""
	I1003 18:11:31.685665   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:31.686013   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:32.185823   31648 type.go:168] "Request Body" body=""
	I1003 18:11:32.185918   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:32.186314   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:32.685025   31648 type.go:168] "Request Body" body=""
	I1003 18:11:32.685090   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:32.685396   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:33.185093   31648 type.go:168] "Request Body" body=""
	I1003 18:11:33.185177   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:33.185492   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:33.685174   31648 type.go:168] "Request Body" body=""
	I1003 18:11:33.685294   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:33.685598   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:33.685653   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:34.185347   31648 type.go:168] "Request Body" body=""
	I1003 18:11:34.185424   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:34.185757   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:34.685584   31648 type.go:168] "Request Body" body=""
	I1003 18:11:34.685700   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:34.686040   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:35.185805   31648 type.go:168] "Request Body" body=""
	I1003 18:11:35.185867   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:35.186199   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:35.685954   31648 type.go:168] "Request Body" body=""
	I1003 18:11:35.686050   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:35.686359   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:11:35.686411   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:11:36.185172   31648 type.go:168] "Request Body" body=""
	I1003 18:11:36.185238   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:36.185535   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:11:36.685215   31648 type.go:168] "Request Body" body=""
	I1003 18:11:36.685302   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:11:36.685612   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... repeated polling entries omitted: the identical "Request"/"Response" pair (GET https://192.168.49.2:8441/api/v1/nodes/functional-889240, empty response, status="" milliseconds=0) recurs every ~500ms from 18:11:37 through 18:12:38; the periodic retry warnings emitted during that window are kept below ...]
	W1003 18:11:38.186115   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:11:40.685679   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:11:43.185640   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:11:45.186272   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:11:47.685625   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:11:49.686226   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:11:52.186043   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:11:54.685776   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:11:56.686379   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:11:59.185831   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:12:01.685766   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:12:04.185500   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:12:06.186035   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:12:08.186246   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:12:10.686266   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:12:13.185456   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:12:15.185756   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:12:17.186122   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:12:19.685419   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:12:21.685651   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:12:24.185523   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:12:26.185815   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:12:28.685797   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:12:31.185890   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:12:33.685805   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:12:35.686404   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	W1003 18:12:38.185426   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:38.684922   31648 type.go:168] "Request Body" body=""
	I1003 18:12:38.685020   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:38.685336   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:39.186015   31648 type.go:168] "Request Body" body=""
	I1003 18:12:39.186082   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:39.186391   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:39.684964   31648 type.go:168] "Request Body" body=""
	I1003 18:12:39.685064   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:39.685384   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:40.185016   31648 type.go:168] "Request Body" body=""
	I1003 18:12:40.185081   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:40.185399   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:40.185451   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:40.685023   31648 type.go:168] "Request Body" body=""
	I1003 18:12:40.685100   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:40.685415   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:41.185286   31648 type.go:168] "Request Body" body=""
	I1003 18:12:41.185356   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:41.185704   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:41.685271   31648 type.go:168] "Request Body" body=""
	I1003 18:12:41.685345   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:41.685676   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:42.185232   31648 type.go:168] "Request Body" body=""
	I1003 18:12:42.185297   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:42.185603   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:42.185677   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:42.685166   31648 type.go:168] "Request Body" body=""
	I1003 18:12:42.685261   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:42.685582   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:43.185142   31648 type.go:168] "Request Body" body=""
	I1003 18:12:43.185210   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:43.185530   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:43.685335   31648 type.go:168] "Request Body" body=""
	I1003 18:12:43.685517   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:43.686011   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:44.185546   31648 type.go:168] "Request Body" body=""
	I1003 18:12:44.185637   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:44.185952   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:44.186027   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:44.685689   31648 type.go:168] "Request Body" body=""
	I1003 18:12:44.685790   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:44.686111   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:45.185834   31648 type.go:168] "Request Body" body=""
	I1003 18:12:45.185923   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:45.186247   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:45.685720   31648 type.go:168] "Request Body" body=""
	I1003 18:12:45.685788   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:45.686128   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:46.185754   31648 type.go:168] "Request Body" body=""
	I1003 18:12:46.185839   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:46.186221   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:46.186277   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:46.685820   31648 type.go:168] "Request Body" body=""
	I1003 18:12:46.685886   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:46.686208   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:47.185851   31648 type.go:168] "Request Body" body=""
	I1003 18:12:47.185923   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:47.186245   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:47.685882   31648 type.go:168] "Request Body" body=""
	I1003 18:12:47.685947   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:47.686262   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:48.185908   31648 type.go:168] "Request Body" body=""
	I1003 18:12:48.185999   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:48.186381   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:48.186430   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:48.686002   31648 type.go:168] "Request Body" body=""
	I1003 18:12:48.686088   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:48.686447   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:49.185029   31648 type.go:168] "Request Body" body=""
	I1003 18:12:49.185102   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:49.185407   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:49.685003   31648 type.go:168] "Request Body" body=""
	I1003 18:12:49.685079   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:49.685399   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:50.184995   31648 type.go:168] "Request Body" body=""
	I1003 18:12:50.185063   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:50.185376   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:50.685005   31648 type.go:168] "Request Body" body=""
	I1003 18:12:50.685086   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:50.685402   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:50.685457   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:51.185264   31648 type.go:168] "Request Body" body=""
	I1003 18:12:51.185331   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:51.185656   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:51.685186   31648 type.go:168] "Request Body" body=""
	I1003 18:12:51.685261   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:51.685581   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:52.185171   31648 type.go:168] "Request Body" body=""
	I1003 18:12:52.185246   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:52.185567   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:52.685150   31648 type.go:168] "Request Body" body=""
	I1003 18:12:52.685238   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:52.685565   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:52.685619   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:53.185114   31648 type.go:168] "Request Body" body=""
	I1003 18:12:53.185178   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:53.185492   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:53.685072   31648 type.go:168] "Request Body" body=""
	I1003 18:12:53.685148   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:53.685473   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:54.185075   31648 type.go:168] "Request Body" body=""
	I1003 18:12:54.185146   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:54.185455   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:54.685278   31648 type.go:168] "Request Body" body=""
	I1003 18:12:54.685361   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:54.685694   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:54.685749   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:55.185253   31648 type.go:168] "Request Body" body=""
	I1003 18:12:55.185324   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:55.185627   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:55.685205   31648 type.go:168] "Request Body" body=""
	I1003 18:12:55.685291   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:55.685628   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:56.185471   31648 type.go:168] "Request Body" body=""
	I1003 18:12:56.185542   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:56.185859   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:56.685418   31648 type.go:168] "Request Body" body=""
	I1003 18:12:56.685501   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:56.685842   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:56.685903   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:57.185408   31648 type.go:168] "Request Body" body=""
	I1003 18:12:57.185483   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:57.185825   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:57.685392   31648 type.go:168] "Request Body" body=""
	I1003 18:12:57.685471   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:57.685812   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:58.185364   31648 type.go:168] "Request Body" body=""
	I1003 18:12:58.185431   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:58.185736   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:58.685296   31648 type.go:168] "Request Body" body=""
	I1003 18:12:58.685379   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:58.685735   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:12:59.185312   31648 type.go:168] "Request Body" body=""
	I1003 18:12:59.185381   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:59.185710   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:12:59.185769   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:12:59.685328   31648 type.go:168] "Request Body" body=""
	I1003 18:12:59.685404   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:12:59.685769   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:00.185320   31648 type.go:168] "Request Body" body=""
	I1003 18:13:00.185386   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:00.185713   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:00.685362   31648 type.go:168] "Request Body" body=""
	I1003 18:13:00.685457   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:00.685823   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:01.185697   31648 type.go:168] "Request Body" body=""
	I1003 18:13:01.185765   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:01.186114   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:01.186172   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:01.685762   31648 type.go:168] "Request Body" body=""
	I1003 18:13:01.685852   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:01.686240   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:02.185865   31648 type.go:168] "Request Body" body=""
	I1003 18:13:02.185951   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:02.186283   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:02.685917   31648 type.go:168] "Request Body" body=""
	I1003 18:13:02.686014   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:02.686332   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:03.185942   31648 type.go:168] "Request Body" body=""
	I1003 18:13:03.186032   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:03.186345   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:03.186397   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:03.684942   31648 type.go:168] "Request Body" body=""
	I1003 18:13:03.685055   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:03.685383   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:04.184939   31648 type.go:168] "Request Body" body=""
	I1003 18:13:04.185041   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:04.185351   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:04.685279   31648 type.go:168] "Request Body" body=""
	I1003 18:13:04.685358   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:04.685695   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:05.185233   31648 type.go:168] "Request Body" body=""
	I1003 18:13:05.185306   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:05.185608   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:05.685179   31648 type.go:168] "Request Body" body=""
	I1003 18:13:05.685255   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:05.685582   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:05.685657   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:06.185409   31648 type.go:168] "Request Body" body=""
	I1003 18:13:06.185478   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:06.185807   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:06.685397   31648 type.go:168] "Request Body" body=""
	I1003 18:13:06.685483   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:06.685824   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:07.185410   31648 type.go:168] "Request Body" body=""
	I1003 18:13:07.185478   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:07.185799   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:07.685361   31648 type.go:168] "Request Body" body=""
	I1003 18:13:07.685444   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:07.685776   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:07.685829   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:08.185354   31648 type.go:168] "Request Body" body=""
	I1003 18:13:08.185422   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:08.185738   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:08.685299   31648 type.go:168] "Request Body" body=""
	I1003 18:13:08.685380   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:08.685725   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:09.185279   31648 type.go:168] "Request Body" body=""
	I1003 18:13:09.185348   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:09.185678   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:09.685236   31648 type.go:168] "Request Body" body=""
	I1003 18:13:09.685312   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:09.685643   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:10.185169   31648 type.go:168] "Request Body" body=""
	I1003 18:13:10.185241   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:10.185552   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:10.185605   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:10.685136   31648 type.go:168] "Request Body" body=""
	I1003 18:13:10.685223   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:10.685575   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:11.185384   31648 type.go:168] "Request Body" body=""
	I1003 18:13:11.185459   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:11.185788   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:11.685352   31648 type.go:168] "Request Body" body=""
	I1003 18:13:11.685433   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:11.685753   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:12.185074   31648 type.go:168] "Request Body" body=""
	I1003 18:13:12.185141   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:12.185467   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:12.685018   31648 type.go:168] "Request Body" body=""
	I1003 18:13:12.685103   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:12.685412   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:12.685475   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:13.184997   31648 type.go:168] "Request Body" body=""
	I1003 18:13:13.185070   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:13.185403   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:13.684967   31648 type.go:168] "Request Body" body=""
	I1003 18:13:13.685061   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:13.685364   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:14.184923   31648 type.go:168] "Request Body" body=""
	I1003 18:13:14.185026   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:14.185364   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:14.685214   31648 type.go:168] "Request Body" body=""
	I1003 18:13:14.685280   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:14.685641   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:14.685714   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:15.185156   31648 type.go:168] "Request Body" body=""
	I1003 18:13:15.185255   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:15.185584   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:15.685142   31648 type.go:168] "Request Body" body=""
	I1003 18:13:15.685204   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:15.685537   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:16.185388   31648 type.go:168] "Request Body" body=""
	I1003 18:13:16.185470   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:16.185814   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:16.685411   31648 type.go:168] "Request Body" body=""
	I1003 18:13:16.685497   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:16.685863   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:16.685936   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:17.185442   31648 type.go:168] "Request Body" body=""
	I1003 18:13:17.185509   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:17.185829   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:17.685415   31648 type.go:168] "Request Body" body=""
	I1003 18:13:17.685525   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:17.685881   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:18.185495   31648 type.go:168] "Request Body" body=""
	I1003 18:13:18.185563   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:18.185876   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:18.685159   31648 type.go:168] "Request Body" body=""
	I1003 18:13:18.685230   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:18.685527   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:19.185084   31648 type.go:168] "Request Body" body=""
	I1003 18:13:19.185161   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:19.185450   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:19.185506   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:19.685103   31648 type.go:168] "Request Body" body=""
	I1003 18:13:19.685191   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:19.685616   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:20.185169   31648 type.go:168] "Request Body" body=""
	I1003 18:13:20.185250   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:20.185540   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:20.685137   31648 type.go:168] "Request Body" body=""
	I1003 18:13:20.685209   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:20.685542   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:21.185328   31648 type.go:168] "Request Body" body=""
	I1003 18:13:21.185409   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:21.185747   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:21.185800   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:21.685330   31648 type.go:168] "Request Body" body=""
	I1003 18:13:21.685393   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:21.685693   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:22.185267   31648 type.go:168] "Request Body" body=""
	I1003 18:13:22.185361   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:22.185713   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:22.685319   31648 type.go:168] "Request Body" body=""
	I1003 18:13:22.685385   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:22.685724   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:23.185388   31648 type.go:168] "Request Body" body=""
	I1003 18:13:23.185472   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:23.185812   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:23.185875   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:23.685447   31648 type.go:168] "Request Body" body=""
	I1003 18:13:23.685515   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:23.685833   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:24.185390   31648 type.go:168] "Request Body" body=""
	I1003 18:13:24.185457   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:24.185762   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:24.685669   31648 type.go:168] "Request Body" body=""
	I1003 18:13:24.685745   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:24.686090   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:25.185723   31648 type.go:168] "Request Body" body=""
	I1003 18:13:25.185792   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:25.186120   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:25.186180   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:25.685886   31648 type.go:168] "Request Body" body=""
	I1003 18:13:25.685961   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:25.686311   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:26.185007   31648 type.go:168] "Request Body" body=""
	I1003 18:13:26.185071   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:26.185380   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:26.684952   31648 type.go:168] "Request Body" body=""
	I1003 18:13:26.685041   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:26.685347   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:27.185970   31648 type.go:168] "Request Body" body=""
	I1003 18:13:27.186046   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:27.186356   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:27.186405   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:27.685041   31648 type.go:168] "Request Body" body=""
	I1003 18:13:27.685106   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:27.685416   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:28.185003   31648 type.go:168] "Request Body" body=""
	I1003 18:13:28.185070   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:28.185403   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:28.684968   31648 type.go:168] "Request Body" body=""
	I1003 18:13:28.685055   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:28.685378   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:29.184912   31648 type.go:168] "Request Body" body=""
	I1003 18:13:29.185004   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:29.185313   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:29.686012   31648 type.go:168] "Request Body" body=""
	I1003 18:13:29.686076   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:29.686383   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:29.686435   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:30.184929   31648 type.go:168] "Request Body" body=""
	I1003 18:13:30.185073   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:30.185387   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:30.684930   31648 type.go:168] "Request Body" body=""
	I1003 18:13:30.685049   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:30.685367   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:31.185212   31648 type.go:168] "Request Body" body=""
	I1003 18:13:31.185277   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:31.185571   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:31.685142   31648 type.go:168] "Request Body" body=""
	I1003 18:13:31.685208   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:31.685504   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:32.185085   31648 type.go:168] "Request Body" body=""
	I1003 18:13:32.185151   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:32.185469   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:32.185524   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:32.685051   31648 type.go:168] "Request Body" body=""
	I1003 18:13:32.685118   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:32.685424   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:33.185022   31648 type.go:168] "Request Body" body=""
	I1003 18:13:33.185092   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:33.185392   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:33.684962   31648 type.go:168] "Request Body" body=""
	I1003 18:13:33.685058   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:33.685365   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:34.184958   31648 type.go:168] "Request Body" body=""
	I1003 18:13:34.185041   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:34.185342   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:34.685149   31648 type.go:168] "Request Body" body=""
	I1003 18:13:34.685221   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:34.685506   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:34.685560   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:35.185096   31648 type.go:168] "Request Body" body=""
	I1003 18:13:35.185162   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:35.185507   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:35.685072   31648 type.go:168] "Request Body" body=""
	I1003 18:13:35.685138   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:35.685436   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:36.185249   31648 type.go:168] "Request Body" body=""
	I1003 18:13:36.185312   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:36.185619   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:36.685207   31648 type.go:168] "Request Body" body=""
	I1003 18:13:36.685270   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:36.685603   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:36.685664   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:37.185187   31648 type.go:168] "Request Body" body=""
	I1003 18:13:37.185258   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:37.185604   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:37.685170   31648 type.go:168] "Request Body" body=""
	I1003 18:13:37.685238   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:37.685540   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:38.185094   31648 type.go:168] "Request Body" body=""
	I1003 18:13:38.185165   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:38.185480   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:38.685085   31648 type.go:168] "Request Body" body=""
	I1003 18:13:38.685154   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:38.685491   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:39.185087   31648 type.go:168] "Request Body" body=""
	I1003 18:13:39.185161   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:39.185473   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:39.185530   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:39.685041   31648 type.go:168] "Request Body" body=""
	I1003 18:13:39.685104   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:39.685443   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:40.184993   31648 type.go:168] "Request Body" body=""
	I1003 18:13:40.185060   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:40.185369   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:40.684957   31648 type.go:168] "Request Body" body=""
	I1003 18:13:40.685046   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:40.685391   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:41.185256   31648 type.go:168] "Request Body" body=""
	I1003 18:13:41.185323   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:41.185632   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:41.185691   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:41.685166   31648 type.go:168] "Request Body" body=""
	I1003 18:13:41.685236   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:41.685524   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:42.185147   31648 type.go:168] "Request Body" body=""
	I1003 18:13:42.185215   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:42.185512   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:42.685072   31648 type.go:168] "Request Body" body=""
	I1003 18:13:42.685137   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:42.685438   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:43.185039   31648 type.go:168] "Request Body" body=""
	I1003 18:13:43.185104   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:43.185400   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:43.684960   31648 type.go:168] "Request Body" body=""
	I1003 18:13:43.685045   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:43.685352   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:43.685405   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:44.184941   31648 type.go:168] "Request Body" body=""
	I1003 18:13:44.185024   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:44.185317   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:44.685052   31648 type.go:168] "Request Body" body=""
	I1003 18:13:44.685120   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:44.685425   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:45.185055   31648 type.go:168] "Request Body" body=""
	I1003 18:13:45.185131   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:45.185445   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:45.685028   31648 type.go:168] "Request Body" body=""
	I1003 18:13:45.685092   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:45.685396   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:45.685450   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:46.185196   31648 type.go:168] "Request Body" body=""
	I1003 18:13:46.185259   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:46.185598   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:46.685146   31648 type.go:168] "Request Body" body=""
	I1003 18:13:46.685207   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:46.685520   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:47.185085   31648 type.go:168] "Request Body" body=""
	I1003 18:13:47.185146   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:47.185435   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:47.685023   31648 type.go:168] "Request Body" body=""
	I1003 18:13:47.685083   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:47.685387   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:48.184938   31648 type.go:168] "Request Body" body=""
	I1003 18:13:48.185024   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:48.185317   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:48.185366   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:48.685968   31648 type.go:168] "Request Body" body=""
	I1003 18:13:48.686071   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:48.686392   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:49.184927   31648 type.go:168] "Request Body" body=""
	I1003 18:13:49.185007   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:49.185301   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:49.685951   31648 type.go:168] "Request Body" body=""
	I1003 18:13:49.686058   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:49.686375   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:50.185987   31648 type.go:168] "Request Body" body=""
	I1003 18:13:50.186049   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:50.186339   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:50.186393   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:50.686008   31648 type.go:168] "Request Body" body=""
	I1003 18:13:50.686095   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:50.686413   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:51.185213   31648 type.go:168] "Request Body" body=""
	I1003 18:13:51.185281   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:51.185558   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:51.685097   31648 type.go:168] "Request Body" body=""
	I1003 18:13:51.685183   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:51.685518   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:52.185069   31648 type.go:168] "Request Body" body=""
	I1003 18:13:52.185132   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:52.185409   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:52.685038   31648 type.go:168] "Request Body" body=""
	I1003 18:13:52.685113   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:52.685416   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:52.685468   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:53.184948   31648 type.go:168] "Request Body" body=""
	I1003 18:13:53.185026   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:53.185309   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:53.685950   31648 type.go:168] "Request Body" body=""
	I1003 18:13:53.686043   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:53.686348   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:54.185948   31648 type.go:168] "Request Body" body=""
	I1003 18:13:54.186022   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:54.186302   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:54.685064   31648 type.go:168] "Request Body" body=""
	I1003 18:13:54.685138   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:54.685429   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:54.685486   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:55.185055   31648 type.go:168] "Request Body" body=""
	I1003 18:13:55.185122   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:55.185388   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:55.685066   31648 type.go:168] "Request Body" body=""
	I1003 18:13:55.685164   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:55.685462   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:56.185338   31648 type.go:168] "Request Body" body=""
	I1003 18:13:56.185406   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:56.185704   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:56.685239   31648 type.go:168] "Request Body" body=""
	I1003 18:13:56.685304   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:56.685629   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:56.685684   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:57.185240   31648 type.go:168] "Request Body" body=""
	I1003 18:13:57.185305   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:57.185635   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:57.685223   31648 type.go:168] "Request Body" body=""
	I1003 18:13:57.685287   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:57.685578   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:58.185123   31648 type.go:168] "Request Body" body=""
	I1003 18:13:58.185189   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:58.185504   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:58.685074   31648 type.go:168] "Request Body" body=""
	I1003 18:13:58.685137   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:58.685464   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:13:59.185038   31648 type.go:168] "Request Body" body=""
	I1003 18:13:59.185102   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:59.185391   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:13:59.185441   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:13:59.684997   31648 type.go:168] "Request Body" body=""
	I1003 18:13:59.685066   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:13:59.685383   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:00.184957   31648 type.go:168] "Request Body" body=""
	I1003 18:14:00.185041   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:00.185348   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:00.685990   31648 type.go:168] "Request Body" body=""
	I1003 18:14:00.686052   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:00.686352   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:01.185220   31648 type.go:168] "Request Body" body=""
	I1003 18:14:01.185292   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:01.185619   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:01.185673   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:01.685170   31648 type.go:168] "Request Body" body=""
	I1003 18:14:01.685244   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:01.685572   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:02.185133   31648 type.go:168] "Request Body" body=""
	I1003 18:14:02.185197   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:02.185506   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:02.685118   31648 type.go:168] "Request Body" body=""
	I1003 18:14:02.685184   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:02.685488   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:03.185090   31648 type.go:168] "Request Body" body=""
	I1003 18:14:03.185159   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:03.185488   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:03.685055   31648 type.go:168] "Request Body" body=""
	I1003 18:14:03.685119   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:03.685428   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:03.685480   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:04.185061   31648 type.go:168] "Request Body" body=""
	I1003 18:14:04.185131   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:04.185458   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:04.685298   31648 type.go:168] "Request Body" body=""
	I1003 18:14:04.685366   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:04.685670   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:05.185278   31648 type.go:168] "Request Body" body=""
	I1003 18:14:05.185348   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:05.185711   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:05.685243   31648 type.go:168] "Request Body" body=""
	I1003 18:14:05.685313   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:05.685621   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:05.685670   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:06.185390   31648 type.go:168] "Request Body" body=""
	I1003 18:14:06.185454   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:06.185796   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:06.685338   31648 type.go:168] "Request Body" body=""
	I1003 18:14:06.685404   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:06.685744   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:07.185312   31648 type.go:168] "Request Body" body=""
	I1003 18:14:07.185375   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:07.185694   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:07.685319   31648 type.go:168] "Request Body" body=""
	I1003 18:14:07.685388   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:07.685720   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:07.685775   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:08.185299   31648 type.go:168] "Request Body" body=""
	I1003 18:14:08.185362   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:08.185681   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:08.685362   31648 type.go:168] "Request Body" body=""
	I1003 18:14:08.685501   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:08.686040   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:09.185088   31648 type.go:168] "Request Body" body=""
	I1003 18:14:09.185166   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:09.185492   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:09.685168   31648 type.go:168] "Request Body" body=""
	I1003 18:14:09.685230   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:09.685527   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:10.185203   31648 type.go:168] "Request Body" body=""
	I1003 18:14:10.185266   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:10.185584   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:10.185635   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:10.685306   31648 type.go:168] "Request Body" body=""
	I1003 18:14:10.685367   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:10.685706   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:11.185477   31648 type.go:168] "Request Body" body=""
	I1003 18:14:11.185545   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:11.185858   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:11.685629   31648 type.go:168] "Request Body" body=""
	I1003 18:14:11.685690   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:11.686017   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:12.185788   31648 type.go:168] "Request Body" body=""
	I1003 18:14:12.185850   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:12.186194   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1003 18:14:12.186261   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-889240": dial tcp 192.168.49.2:8441: connect: connection refused
	I1003 18:14:12.685007   31648 type.go:168] "Request Body" body=""
	I1003 18:14:12.685075   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:12.685367   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:13.185078   31648 type.go:168] "Request Body" body=""
	I1003 18:14:13.185142   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:13.185434   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:13.685146   31648 type.go:168] "Request Body" body=""
	I1003 18:14:13.685215   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:13.685514   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:14.185200   31648 type.go:168] "Request Body" body=""
	I1003 18:14:14.185264   31648 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-889240" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1003 18:14:14.185577   31648 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1003 18:14:14.685359   31648 type.go:168] "Request Body" body=""
	W1003 18:14:14.685420   31648 node_ready.go:55] error getting node "functional-889240" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded
	I1003 18:14:14.685433   31648 node_ready.go:38] duration metric: took 6m0.000605507s for node "functional-889240" to be "Ready" ...
	I1003 18:14:14.688030   31648 out.go:203] 
	W1003 18:14:14.689379   31648 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1003 18:14:14.689402   31648 out.go:285] * 
	W1003 18:14:14.691089   31648 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:14:14.693118   31648 out.go:203] 
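
The six-minute retry loop above is minikube's node-readiness wait (node_ready.go): every ~500ms it GETs /api/v1/nodes/functional-889240, treats connection-refused as retryable, and gives up when the context deadline expires. A minimal client-go sketch of the same pattern follows; the kubeconfig loading, node name, and 500ms/6m cadence are taken from this log for illustration and are not minikube's exact implementation.

	// readywait.go: poll a node's Ready condition until a deadline.
	// Minimal sketch, assuming a reachable cluster and a default kubeconfig;
	// the interval/timeout mirror the 500ms/6m cadence visible in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "functional-889240", metav1.GetOptions{})
				if err != nil {
					// Transient errors (e.g. connection refused while the
					// apiserver restarts) are swallowed so the poll retries.
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
		fmt.Println("wait result:", err) // "context deadline exceeded" on timeout, as above
	}
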
	
	
	==> CRI-O <==
	Oct 03 18:14:23 functional-889240 crio[2966]: time="2025-10-03T18:14:23.24477742Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=d4926c34-d9cd-40a9-8d85-0e2b6d94942f name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:23 functional-889240 crio[2966]: time="2025-10-03T18:14:23.536910262Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=23e33af5-a773-4b1a-9ba2-3601b67d5486 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:23 functional-889240 crio[2966]: time="2025-10-03T18:14:23.537046025Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=23e33af5-a773-4b1a-9ba2-3601b67d5486 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:23 functional-889240 crio[2966]: time="2025-10-03T18:14:23.537074554Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=23e33af5-a773-4b1a-9ba2-3601b67d5486 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:23 functional-889240 crio[2966]: time="2025-10-03T18:14:23.955913267Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=c519c75a-1d33-41c1-bd4e-7a62d7d1392c name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:23 functional-889240 crio[2966]: time="2025-10-03T18:14:23.956061121Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=c519c75a-1d33-41c1-bd4e-7a62d7d1392c name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:23 functional-889240 crio[2966]: time="2025-10-03T18:14:23.956092646Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=c519c75a-1d33-41c1-bd4e-7a62d7d1392c name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:23 functional-889240 crio[2966]: time="2025-10-03T18:14:23.978772267Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=e0c94d54-2133-40bc-8659-e30355633a00 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:23 functional-889240 crio[2966]: time="2025-10-03T18:14:23.978911816Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=e0c94d54-2133-40bc-8659-e30355633a00 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:23 functional-889240 crio[2966]: time="2025-10-03T18:14:23.978961101Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=e0c94d54-2133-40bc-8659-e30355633a00 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:24 functional-889240 crio[2966]: time="2025-10-03T18:14:24.013812991Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=4313db7f-572c-47d0-94d3-ee8c9b922da7 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:24 functional-889240 crio[2966]: time="2025-10-03T18:14:24.013949756Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=4313db7f-572c-47d0-94d3-ee8c9b922da7 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:24 functional-889240 crio[2966]: time="2025-10-03T18:14:24.014010006Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=4313db7f-572c-47d0-94d3-ee8c9b922da7 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:24 functional-889240 crio[2966]: time="2025-10-03T18:14:24.454255502Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=7c86583e-9529-491c-ad99-0b6b49fd0710 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:26 functional-889240 crio[2966]: time="2025-10-03T18:14:26.212073251Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=b8fcc7aa-65a7-4e09-aa5f-551109f28c2d name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:26 functional-889240 crio[2966]: time="2025-10-03T18:14:26.213006244Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=0c30200f-523d-48ee-b242-646cf265f37b name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:14:26 functional-889240 crio[2966]: time="2025-10-03T18:14:26.213822997Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-889240/kube-scheduler" id=d1827c2d-46bf-4393-84fd-31f5f31d496c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:26 functional-889240 crio[2966]: time="2025-10-03T18:14:26.214091447Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:14:26 functional-889240 crio[2966]: time="2025-10-03T18:14:26.217466916Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:14:26 functional-889240 crio[2966]: time="2025-10-03T18:14:26.218064528Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:14:26 functional-889240 crio[2966]: time="2025-10-03T18:14:26.232551959Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d1827c2d-46bf-4393-84fd-31f5f31d496c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:26 functional-889240 crio[2966]: time="2025-10-03T18:14:26.234364909Z" level=info msg="createCtr: deleting container ID 130468ffc22cf644b2e55a636fafe10cef3cf79024aeb175ac83c6a8a38db2d5 from idIndex" id=d1827c2d-46bf-4393-84fd-31f5f31d496c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:26 functional-889240 crio[2966]: time="2025-10-03T18:14:26.234424381Z" level=info msg="createCtr: removing container 130468ffc22cf644b2e55a636fafe10cef3cf79024aeb175ac83c6a8a38db2d5" id=d1827c2d-46bf-4393-84fd-31f5f31d496c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:26 functional-889240 crio[2966]: time="2025-10-03T18:14:26.234479854Z" level=info msg="createCtr: deleting container 130468ffc22cf644b2e55a636fafe10cef3cf79024aeb175ac83c6a8a38db2d5 from storage" id=d1827c2d-46bf-4393-84fd-31f5f31d496c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:14:26 functional-889240 crio[2966]: time="2025-10-03T18:14:26.237704126Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-889240_kube-system_7dadd1df42d6a2c3d1907f134f7d5ea7_0" id=d1827c2d-46bf-4393-84fd-31f5f31d496c name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:14:27.885660    5504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:14:27.886260    5504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:14:27.887801    5504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:14:27.888231    5504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:14:27.889754    5504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
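
Note that both kubectl here and the earlier minikube poll fail with "connection refused" rather than a timeout: the node's TCP stack is answering, but nothing is listening on 8441, which matches the apiserver container never being created. A two-line dial check (address and timeout are illustrative) makes that distinction explicit:

	// dialcheck.go: distinguish "nothing listening" (connection refused)
	// from "unreachable" (timeout) for the apiserver endpoint.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err) // "connect: connection refused" here
			return
		}
		conn.Close()
		fmt.Println("port is accepting connections")
	}
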
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:14:27 up 56 min,  0 user,  load average: 0.12, 0.03, 0.04
	Linux functional-889240 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:14:21 functional-889240 kubelet[1817]:  > podSandboxID="bb5ee21569299932af0968d7ca6c3e44bd5f6c5d7c8e5900d54800ccc90ccf96"
	Oct 03 18:14:21 functional-889240 kubelet[1817]: E1003 18:14:21.237934    1817 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:14:21 functional-889240 kubelet[1817]:         container kube-apiserver start failed in pod kube-apiserver-functional-889240_kube-system(c6bcf20a60b81dff297fc63f5b978297): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:14:21 functional-889240 kubelet[1817]:  > logger="UnhandledError"
	Oct 03 18:14:21 functional-889240 kubelet[1817]: E1003 18:14:21.237961    1817 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-889240" podUID="c6bcf20a60b81dff297fc63f5b978297"
	Oct 03 18:14:21 functional-889240 kubelet[1817]: E1003 18:14:21.498941    1817 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-889240.186b0d404ae58a04\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-889240.186b0d404ae58a04  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-889240,UID:functional-889240,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-889240 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-889240,},FirstTimestamp:2025-10-03 18:04:09.203935748 +0000 UTC m=+0.376858749,LastTimestamp:2025-10-03 18:04:09.206706066 +0000 UTC m=+0.379629064,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-889240,}"
	Oct 03 18:14:22 functional-889240 kubelet[1817]: E1003 18:14:22.212210    1817 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:14:22 functional-889240 kubelet[1817]: E1003 18:14:22.240119    1817 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:14:22 functional-889240 kubelet[1817]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:14:22 functional-889240 kubelet[1817]:  > podSandboxID="65835069a3bb03e380bb50149082d0338f4c2642bf6aea8dacf1e0715b6f21c8"
	Oct 03 18:14:22 functional-889240 kubelet[1817]: E1003 18:14:22.240225    1817 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:14:22 functional-889240 kubelet[1817]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-889240_kube-system(7e715cb6024854d45a9fa99576167e43): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:14:22 functional-889240 kubelet[1817]:  > logger="UnhandledError"
	Oct 03 18:14:22 functional-889240 kubelet[1817]: E1003 18:14:22.240257    1817 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-889240" podUID="7e715cb6024854d45a9fa99576167e43"
	Oct 03 18:14:23 functional-889240 kubelet[1817]: E1003 18:14:23.891877    1817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-889240?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 03 18:14:24 functional-889240 kubelet[1817]: I1003 18:14:24.090864    1817 kubelet_node_status.go:75] "Attempting to register node" node="functional-889240"
	Oct 03 18:14:24 functional-889240 kubelet[1817]: E1003 18:14:24.091265    1817 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-889240"
	Oct 03 18:14:26 functional-889240 kubelet[1817]: E1003 18:14:26.211532    1817 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:14:26 functional-889240 kubelet[1817]: E1003 18:14:26.238091    1817 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:14:26 functional-889240 kubelet[1817]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:14:26 functional-889240 kubelet[1817]:  > podSandboxID="9ea0d784c2fd12bcd1db05033ba2964baa15be14deeae00b6508f924c37e3473"
	Oct 03 18:14:26 functional-889240 kubelet[1817]: E1003 18:14:26.238205    1817 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:14:26 functional-889240 kubelet[1817]:         container kube-scheduler start failed in pod kube-scheduler-functional-889240_kube-system(7dadd1df42d6a2c3d1907f134f7d5ea7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:14:26 functional-889240 kubelet[1817]:  > logger="UnhandledError"
	Oct 03 18:14:26 functional-889240 kubelet[1817]: E1003 18:14:26.238247    1817 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-889240" podUID="7dadd1df42d6a2c3d1907f134f7d5ea7"
	

                                                
                                                
-- /stdout --
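
The kubelet log above ends with all three control-plane containers (kube-apiserver, kube-controller-manager, kube-scheduler) stuck in the same CreateContainerError loop: "container create failed: cannot open sd-bus: No such file or directory", i.e. CRI-O's OCI runtime cannot reach the systemd bus inside the kicbase container, so the static pods are never created. A minimal sketch of confirming this by hand, following the crictl commands kubeadm itself suggests (profile name taken from the log above; CONTAINERID is a placeholder):

	minikube -p functional-889240 ssh -- sudo crictl ps -a | grep kube | grep -v pause
	minikube -p functional-889240 ssh -- sudo crictl logs CONTAINERID
	minikube -p functional-889240 ssh -- sudo journalctl -u crio --no-pager | tail -n 50

Because these containers fail at create time, crictl logs may return nothing; the CRI-O journal is usually where the sd-bus error surfaces.
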
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240: exit status 2 (300.548686ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-889240" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (2.03s)

                                                
                                    
TestFunctional/serial/ExtraConfig (733.9s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-889240 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-889240 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (12m12.046396022s)

                                                
                                                
-- stdout --
	* [functional-889240] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-889240" primary control-plane node in "functional-889240" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.785786ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001254073s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001316832s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00135784s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500810972s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001083242s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112366s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001257154s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500810972s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001083242s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112366s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001257154s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

                                                
                                                
** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-amd64 start -p functional-889240 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:776: restart took 12m12.048528136s for "functional-889240" cluster.
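
Both kubeadm attempts in this restart fail identically: the kubelet reports healthy within seconds, but none of the three control-plane components ever answer their health endpoints before the 4m0s deadline, consistent with the CreateContainerError loop shown earlier rather than a slow start. A quick sketch for replaying the exact probes kubeadm ran, from inside the node (assuming curl is available in the kicbase image; while the containers cannot be created, all three are expected to fail with connection refused):

	minikube -p functional-889240 ssh -- curl -ksS https://192.168.49.2:8441/livez
	minikube -p functional-889240 ssh -- curl -ksS https://127.0.0.1:10257/healthz
	minikube -p functional-889240 ssh -- curl -ksS https://127.0.0.1:10259/livez
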
I1003 18:26:40.722851   12212 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-889240
helpers_test.go:243: (dbg) docker inspect functional-889240:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	        "Created": "2025-10-03T17:59:56.619817507Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 26766,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T17:59:56.652603806Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hosts",
	        "LogPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a-json.log",
	        "Name": "/functional-889240",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-889240:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-889240",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	                "LowerDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-889240",
	                "Source": "/var/lib/docker/volumes/functional-889240/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-889240",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-889240",
	                "name.minikube.sigs.k8s.io": "functional-889240",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da15d31dc23bdd4694ae9e3b61015d7ce0d61668c73d3e386422834c6f0321d8",
	            "SandboxKey": "/var/run/docker/netns/da15d31dc23b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-889240": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:9e:1d:e9:d9:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "03281bed183d0817c0bc237b5c25093fc10222138aedde4c7deef5823759fa24",
	                    "EndpointID": "28fa584fdd6e253816ae08a2460ef02b91085c8a7996d55008876e3bd65bbc7e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-889240",
	                        "9f4f0f10b4a9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
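
The docker inspect output shows the node container itself is healthy: State.Status is "running", the 8441/tcp apiserver port is published to 127.0.0.1:32781, and the container holds 192.168.49.2 on the functional-889240 network, so the failure is inside the node rather than at the Docker networking layer. The same Go-template filter minikube uses for the SSH port later in this log can pull any mapping out directly; for example (expected output per the dump above: 32781):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-889240
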
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240: exit status 2 (310.451494ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 logs -n 25
helpers_test.go:260: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-093146 --log_dir /tmp/nospam-093146 unpause                                                            │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ unpause │ nospam-093146 --log_dir /tmp/nospam-093146 unpause                                                            │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ unpause │ nospam-093146 --log_dir /tmp/nospam-093146 unpause                                                            │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ stop    │ nospam-093146 --log_dir /tmp/nospam-093146 stop                                                               │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ stop    │ nospam-093146 --log_dir /tmp/nospam-093146 stop                                                               │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ stop    │ nospam-093146 --log_dir /tmp/nospam-093146 stop                                                               │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ delete  │ -p nospam-093146                                                                                              │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ start   │ -p functional-889240 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │                     │
	│ start   │ -p functional-889240 --alsologtostderr -v=8                                                                   │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:08 UTC │                     │
	│ cache   │ functional-889240 cache add registry.k8s.io/pause:3.1                                                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ functional-889240 cache add registry.k8s.io/pause:3.3                                                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ functional-889240 cache add registry.k8s.io/pause:latest                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ functional-889240 cache add minikube-local-cache-test:functional-889240                                       │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ functional-889240 cache delete minikube-local-cache-test:functional-889240                                    │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ ssh     │ functional-889240 ssh sudo crictl images                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ ssh     │ functional-889240 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ ssh     │ functional-889240 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │                     │
	│ cache   │ functional-889240 cache reload                                                                                │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ ssh     │ functional-889240 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ kubectl │ functional-889240 kubectl -- --context functional-889240 get pods                                             │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │                     │
	│ start   │ -p functional-889240 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:14:28
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:14:28.726754   38063 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:14:28.726997   38063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:14:28.727000   38063 out.go:374] Setting ErrFile to fd 2...
	I1003 18:14:28.727003   38063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:14:28.727268   38063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:14:28.727968   38063 out.go:368] Setting JSON to false
	I1003 18:14:28.729004   38063 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3420,"bootTime":1759511849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:14:28.729075   38063 start.go:140] virtualization: kvm guest
	I1003 18:14:28.731008   38063 out.go:179] * [functional-889240] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:14:28.732488   38063 notify.go:220] Checking for updates...
	I1003 18:14:28.732492   38063 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:14:28.733579   38063 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:14:28.734939   38063 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:14:28.736179   38063 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:14:28.737411   38063 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:14:28.738587   38063 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:14:28.740087   38063 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:14:28.740180   38063 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:14:28.764594   38063 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:14:28.764685   38063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:14:28.818292   38063 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-03 18:14:28.807876558 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:14:28.818395   38063 docker.go:318] overlay module found
	I1003 18:14:28.820263   38063 out.go:179] * Using the docker driver based on existing profile
	I1003 18:14:28.821380   38063 start.go:304] selected driver: docker
	I1003 18:14:28.821386   38063 start.go:924] validating driver "docker" against &{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:14:28.821453   38063 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:14:28.821525   38063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:14:28.873759   38063 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-03 18:14:28.863222744 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:14:28.874408   38063 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:14:28.874443   38063 cni.go:84] Creating CNI manager for ""
	I1003 18:14:28.874490   38063 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:14:28.874537   38063 start.go:348] cluster config:
	{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:14:28.876500   38063 out.go:179] * Starting "functional-889240" primary control-plane node in "functional-889240" cluster
	I1003 18:14:28.877706   38063 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:14:28.878837   38063 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:14:28.879769   38063 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:14:28.879795   38063 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:14:28.879802   38063 cache.go:58] Caching tarball of preloaded images
	I1003 18:14:28.879865   38063 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:14:28.879873   38063 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:14:28.879879   38063 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:14:28.879967   38063 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/config.json ...
	I1003 18:14:28.899017   38063 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:14:28.899026   38063 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:14:28.899040   38063 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:14:28.899069   38063 start.go:360] acquireMachinesLock for functional-889240: {Name:mk6750a9fb1c1c3747b0abf2aebe2a2d0047ae3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:14:28.899117   38063 start.go:364] duration metric: took 35.993µs to acquireMachinesLock for "functional-889240"
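The acquireMachinesLock / duration-metric pair above is a simple timed-lock pattern: record a start time, block on the lock, log the elapsed time. A minimal Go sketch of the idea (the function name and log format are illustrative, not minikube's actual API):

package main

import (
	"fmt"
	"sync"
	"time"
)

// timedLock acquires mu and reports how long the acquisition took,
// mirroring the "duration metric: took ... to acquireMachinesLock" line.
func timedLock(mu *sync.Mutex, name string) {
	start := time.Now()
	mu.Lock()
	fmt.Printf("duration metric: took %s to acquire lock for %q\n", time.Since(start), name)
}

func main() {
	var mu sync.Mutex
	timedLock(&mu, "functional-889240")
	mu.Unlock()
}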
	I1003 18:14:28.899130   38063 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:14:28.899133   38063 fix.go:54] fixHost starting: 
	I1003 18:14:28.899327   38063 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:14:28.916111   38063 fix.go:112] recreateIfNeeded on functional-889240: state=Running err=<nil>
	W1003 18:14:28.916134   38063 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 18:14:28.918050   38063 out.go:252] * Updating the running docker "functional-889240" container ...
	I1003 18:14:28.918084   38063 machine.go:93] provisionDockerMachine start ...
	I1003 18:14:28.918165   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:28.934689   38063 main.go:141] libmachine: Using SSH client type: native
	I1003 18:14:28.934913   38063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:14:28.934921   38063 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:14:29.076697   38063 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-889240
	
	I1003 18:14:29.076727   38063 ubuntu.go:182] provisioning hostname "functional-889240"
	I1003 18:14:29.076782   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:29.092887   38063 main.go:141] libmachine: Using SSH client type: native
	I1003 18:14:29.093101   38063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:14:29.093108   38063 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-889240 && echo "functional-889240" | sudo tee /etc/hostname
	I1003 18:14:29.242886   38063 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-889240
	
	I1003 18:14:29.242996   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:29.260006   38063 main.go:141] libmachine: Using SSH client type: native
	I1003 18:14:29.260203   38063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:14:29.260220   38063 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-889240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-889240/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-889240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:14:29.401432   38063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
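The shell snippet executed above updates /etc/hosts idempotently: do nothing if a line for the hostname already exists, rewrite an existing 127.0.1.1 entry if there is one, otherwise append a new entry. A rough Go equivalent of that logic (ensureHostsEntry is a hypothetical helper; the suffix match only approximates the grep pattern, and writing /etc/hosts requires root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry applies the same idempotent logic as the shell snippet:
// skip if the hostname is already mapped, rewrite an existing "127.0.1.1"
// line, or append a new one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		if strings.HasSuffix(strings.TrimSpace(l), " "+hostname) {
			return nil // entry already present
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "functional-889240"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}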
	I1003 18:14:29.401463   38063 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:14:29.401485   38063 ubuntu.go:190] setting up certificates
	I1003 18:14:29.401496   38063 provision.go:84] configureAuth start
	I1003 18:14:29.401542   38063 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-889240
	I1003 18:14:29.417679   38063 provision.go:143] copyHostCerts
	I1003 18:14:29.417732   38063 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:14:29.417754   38063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:14:29.417818   38063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:14:29.417930   38063 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:14:29.417934   38063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:14:29.417959   38063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:14:29.418062   38063 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:14:29.418066   38063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:14:29.418091   38063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
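Each copyHostCerts step above follows a remove-then-copy pattern: delete any stale destination file, copy the source, and log the byte count. A minimal sketch of that pattern (copyCert and its paths are illustrative, not minikube's code):

package main

import (
	"fmt"
	"io"
	"os"
)

// copyCert mirrors the copyHostCerts sequence: remove any stale
// destination, then copy the source file and report the byte count.
func copyCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		fmt.Printf("found %s, removing ...\n", dst)
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
	if err != nil {
		return err
	}
	defer out.Close()
	n, err := io.Copy(out, in)
	if err != nil {
		return err
	}
	fmt.Printf("cp: %s --> %s (%d bytes)\n", src, dst, n)
	return nil
}

func main() {
	if err := copyCert("certs/ca.pem", "ca.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}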
	I1003 18:14:29.418151   38063 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.functional-889240 san=[127.0.0.1 192.168.49.2 functional-889240 localhost minikube]
	I1003 18:14:29.517156   38063 provision.go:177] copyRemoteCerts
	I1003 18:14:29.517211   38063 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:14:29.517244   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:29.534610   38063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:14:29.634576   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:14:29.651152   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1003 18:14:29.667404   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 18:14:29.683300   38063 provision.go:87] duration metric: took 281.795524ms to configureAuth
	I1003 18:14:29.683315   38063 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:14:29.683451   38063 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:14:29.683536   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:29.701238   38063 main.go:141] libmachine: Using SSH client type: native
	I1003 18:14:29.701444   38063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:14:29.701460   38063 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:14:29.964774   38063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:14:29.964789   38063 machine.go:96] duration metric: took 1.046699275s to provisionDockerMachine
	I1003 18:14:29.964799   38063 start.go:293] postStartSetup for "functional-889240" (driver="docker")
	I1003 18:14:29.964807   38063 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:14:29.964862   38063 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:14:29.964919   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:29.982141   38063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:14:30.082849   38063 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:14:30.086167   38063 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:14:30.086182   38063 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:14:30.086190   38063 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:14:30.086245   38063 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:14:30.086322   38063 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:14:30.086390   38063 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts -> hosts in /etc/test/nested/copy/12212
	I1003 18:14:30.086418   38063 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/12212
	I1003 18:14:30.093540   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:14:30.109775   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts --> /etc/test/nested/copy/12212/hosts (40 bytes)
	I1003 18:14:30.125563   38063 start.go:296] duration metric: took 160.752264ms for postStartSetup
	I1003 18:14:30.125613   38063 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:14:30.125652   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:30.142705   38063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:14:30.239819   38063 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:14:30.244462   38063 fix.go:56] duration metric: took 1.345323072s for fixHost
	I1003 18:14:30.244476   38063 start.go:83] releasing machines lock for "functional-889240", held for 1.345352654s
	I1003 18:14:30.244534   38063 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-889240
	I1003 18:14:30.261148   38063 ssh_runner.go:195] Run: cat /version.json
	I1003 18:14:30.261181   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:30.261277   38063 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:14:30.261317   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:30.278533   38063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:14:30.278911   38063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:14:30.374843   38063 ssh_runner.go:195] Run: systemctl --version
	I1003 18:14:30.426119   38063 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:14:30.460148   38063 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:14:30.464555   38063 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:14:30.464600   38063 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:14:30.471950   38063 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 18:14:30.471961   38063 start.go:495] detecting cgroup driver to use...
	I1003 18:14:30.472000   38063 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:14:30.472044   38063 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:14:30.485257   38063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:14:30.496477   38063 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:14:30.496516   38063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:14:30.510101   38063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:14:30.521418   38063 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:14:30.603143   38063 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:14:30.686683   38063 docker.go:234] disabling docker service ...
	I1003 18:14:30.686723   38063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:14:30.700010   38063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:14:30.711397   38063 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:14:30.789401   38063 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:14:30.867745   38063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:14:30.879595   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:14:30.892654   38063 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:14:30.892698   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.901033   38063 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:14:30.901080   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.909297   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.917346   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.925200   38063 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:14:30.932963   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.941075   38063 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.948857   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.956661   38063 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:14:30.963293   38063 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:14:30.969876   38063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:14:31.048833   38063 ssh_runner.go:195] Run: sudo systemctl restart crio
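The sed invocations above each rewrite a single `key = value` line in /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager) before the daemon-reload and crio restart. A hedged Go sketch of the same whole-line rewrite (setCrioOption is a hypothetical helper; run it against a copy unless you actually mean to reconfigure a live node):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioOption rewrites a `key = ...` line in a CRI-O drop-in config,
// equivalent to: sed -i 's|^.*key = .*$|key = "value"|' <path>
func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	if err := setCrioOption(conf, "cgroup_manager", "systemd"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	// A `systemctl daemon-reload` and `systemctl restart crio` would follow,
	// as in the log above.
}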
	I1003 18:14:31.154686   38063 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:14:31.154732   38063 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:14:31.158463   38063 start.go:563] Will wait 60s for crictl version
	I1003 18:14:31.158505   38063 ssh_runner.go:195] Run: which crictl
	I1003 18:14:31.161802   38063 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:14:31.185028   38063 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:14:31.185099   38063 ssh_runner.go:195] Run: crio --version
	I1003 18:14:31.211351   38063 ssh_runner.go:195] Run: crio --version
	I1003 18:14:31.239599   38063 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:14:31.241121   38063 cli_runner.go:164] Run: docker network inspect functional-889240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:14:31.257340   38063 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:14:31.263166   38063 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1003 18:14:31.264167   38063 kubeadm.go:883] updating cluster {Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:14:31.264267   38063 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:14:31.264310   38063 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:14:31.293848   38063 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:14:31.293858   38063 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:14:31.293907   38063 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:14:31.319316   38063 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:14:31.319326   38063 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:14:31.319331   38063 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1003 18:14:31.319423   38063 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-889240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 18:14:31.319482   38063 ssh_runner.go:195] Run: crio config
	I1003 18:14:31.363053   38063 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1003 18:14:31.363070   38063 cni.go:84] Creating CNI manager for ""
	I1003 18:14:31.363079   38063 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:14:31.363097   38063 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:14:31.363115   38063 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-889240 NodeName:functional-889240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map
[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:14:31.363211   38063 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-889240"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 18:14:31.363260   38063 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:14:31.371060   38063 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:14:31.371113   38063 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:14:31.378260   38063 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1003 18:14:31.389622   38063 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:14:31.401169   38063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
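The `scp memory --> ...` lines above copy in-memory payloads (the rendered kubelet unit files and kubeadm.yaml.new) to remote paths. How ssh_runner implements this internally is not shown in the log; one plausible sketch streams the bytes through `sudo tee` over an SSH session (scpMemory is hypothetical; the user, port, and key path are taken from this run's log):

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// scpMemory streams an in-memory payload into a root-owned remote file by
// piping it through `sudo tee`, one plausible implementation of the
// "scp memory --> <remote path>" step logged above.
func scpMemory(client *ssh.Client, data []byte, remotePath string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only; never in production
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32778", cfg) // port from this run's log
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer client.Close()
	if err := scpMemory(client, []byte("# rendered kubeadm.yaml contents\n"), "/var/tmp/minikube/kubeadm.yaml.new"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}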
	I1003 18:14:31.413278   38063 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 18:14:31.416670   38063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:14:31.493997   38063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:14:31.506325   38063 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240 for IP: 192.168.49.2
	I1003 18:14:31.506337   38063 certs.go:195] generating shared ca certs ...
	I1003 18:14:31.506355   38063 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:14:31.506504   38063 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:14:31.506539   38063 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:14:31.506544   38063 certs.go:257] generating profile certs ...
	I1003 18:14:31.506611   38063 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.key
	I1003 18:14:31.506654   38063 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key.eb3f8f7c
	I1003 18:14:31.506684   38063 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key
	I1003 18:14:31.506800   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:14:31.506838   38063 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:14:31.506844   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:14:31.506863   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:14:31.506885   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:14:31.506914   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:14:31.506949   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:14:31.507555   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:14:31.523949   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:14:31.540075   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:14:31.556229   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:14:31.572472   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 18:14:31.588618   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:14:31.604606   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:14:31.620082   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:14:31.636014   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:14:31.652102   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:14:31.668081   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:14:31.684503   38063 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:14:31.696104   38063 ssh_runner.go:195] Run: openssl version
	I1003 18:14:31.701806   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:14:31.709474   38063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:14:31.712729   38063 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:14:31.712776   38063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:14:31.746262   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:14:31.754238   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:14:31.762041   38063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:14:31.765354   38063 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:14:31.765385   38063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:14:31.799341   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
	I1003 18:14:31.807532   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:14:31.815668   38063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:14:31.819149   38063 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:14:31.819195   38063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:14:31.853378   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
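Each `openssl x509 -hash` / `ln -fs` pair above installs a CA certificate under its OpenSSL subject hash (e.g. b5213941.0) so that verification can locate it in /etc/ssl/certs. A small Go sketch of that pair (linkCertByHash is illustrative; it shells out to openssl just as the log does):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash reproduces the `openssl x509 -hash -noout -in` plus
// `ln -fs` pair: compute the certificate's subject hash and symlink
// <hash>.0 in certsDir back to the PEM file.
func linkCertByHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // ln -f semantics: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}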
	I1003 18:14:31.861557   38063 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:14:31.865026   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 18:14:31.898216   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 18:14:31.931439   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 18:14:31.964848   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 18:14:31.997996   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 18:14:32.031331   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
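The `-checkend 86400` runs above ask OpenSSL whether each control-plane certificate expires within the next 24 hours. The same check can be done in pure Go with crypto/x509 (expiresWithin is a hypothetical helper, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin is a pure-Go equivalent of `openssl x509 -checkend`:
// it reports whether the certificate expires within the given window.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}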
	I1003 18:14:32.064773   38063 kubeadm.go:400] StartCluster: {Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:14:32.064844   38063 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:14:32.064884   38063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:14:32.091563   38063 cri.go:89] found id: ""
	I1003 18:14:32.091628   38063 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:14:32.099575   38063 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 18:14:32.099617   38063 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 18:14:32.099649   38063 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 18:14:32.106476   38063 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:14:32.106922   38063 kubeconfig.go:125] found "functional-889240" server: "https://192.168.49.2:8441"
	I1003 18:14:32.108169   38063 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 18:14:32.115724   38063 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-03 18:00:01.716218369 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-03 18:14:31.411258298 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
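The drift detection above hinges on the exit status of `diff -u`: 0 means the freshly rendered kubeadm config matches the deployed one, 1 means drift (here, the changed enable-admission-plugins value) and triggers the reconfigure path. A minimal sketch of that decision (detectDrift is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// detectDrift runs `diff -u old new` the way the log does: exit status 0
// means no change, status 1 means drift (return the unified diff),
// anything else is a real error.
func detectDrift(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	drift, diff, err := detectDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drift {
		fmt.Println("detected kubeadm config drift:\n" + diff)
	}
}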
	I1003 18:14:32.115731   38063 kubeadm.go:1160] stopping kube-system containers ...
	I1003 18:14:32.115740   38063 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1003 18:14:32.115779   38063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:14:32.142745   38063 cri.go:89] found id: ""
	I1003 18:14:32.142803   38063 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1003 18:14:32.181602   38063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:14:32.189432   38063 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  3 18:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct  3 18:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  3 18:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  3 18:04 /etc/kubernetes/scheduler.conf
	
	I1003 18:14:32.189481   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1003 18:14:32.196894   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1003 18:14:32.203921   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:14:32.203965   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:14:32.210881   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1003 18:14:32.217766   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:14:32.217803   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:14:32.224334   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1003 18:14:32.231030   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:14:32.231065   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:14:32.237472   38063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:14:32.244457   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:14:32.283268   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:14:33.742947   38063 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.459652347s)
	I1003 18:14:33.743017   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:14:33.898116   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:14:33.942573   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:14:33.988522   38063 api_server.go:52] waiting for apiserver process to appear ...
	I1003 18:14:33.988576   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... "Run: sudo pgrep -xnf kube-apiserver.*minikube.*" repeated every ~500 ms from 18:14:34.488790 through 18:15:32.988671 with no match; 118 near-identical log lines elided ...]
	I1003 18:15:33.489525   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
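	The half-second spacing of the lines above is minikube's process probe: it repeatedly runs "pgrep -xnf kube-apiserver.*minikube.*" over SSH until a kube-apiserver process appears or the wait gives up. A minimal sketch of the same poll, runnable by hand on the node (the 2-minute cap below is illustrative for this sketch, not minikube's actual timeout):

	    # Poll for a kube-apiserver process every 0.5s, as the log above does.
	    # The 240-iteration (~2 min) limit is an assumption of this sketch.
	    for i in $(seq 1 240); do
	      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	        echo "kube-apiserver is up"; break
	      fi
	      sleep 0.5
	    done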
	I1003 18:15:33.989163   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:33.989216   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:34.014490   38063 cri.go:89] found id: ""
	I1003 18:15:34.014506   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.014513   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:34.014518   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:34.014556   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:34.039203   38063 cri.go:89] found id: ""
	I1003 18:15:34.039217   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.039223   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:34.039227   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:34.039266   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:34.064423   38063 cri.go:89] found id: ""
	I1003 18:15:34.064440   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.064448   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:34.064452   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:34.064494   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:34.089636   38063 cri.go:89] found id: ""
	I1003 18:15:34.089650   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.089661   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:34.089665   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:34.089707   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:34.114198   38063 cri.go:89] found id: ""
	I1003 18:15:34.114211   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.114217   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:34.114221   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:34.114261   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:34.138167   38063 cri.go:89] found id: ""
	I1003 18:15:34.138180   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.138186   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:34.138190   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:34.138234   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:34.163057   38063 cri.go:89] found id: ""
	I1003 18:15:34.163071   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.163079   38063 logs.go:284] No container was found matching "kindnet"
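	With no apiserver process found, minikube falls back to asking the container runtime directly: for each expected control-plane component it lists matching CRI containers with crictl, and every query above comes back empty. The same sweep can be reproduced by hand on the node using the commands from the log:

	    # List all containers (any state) for each control-plane component;
	    # an empty result means the component's container was never created.
	    for name in kube-apiserver etcd coredns kube-scheduler \
	                kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      echo "$name: ${ids:-<none>}"
	    done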
	I1003 18:15:34.163090   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:34.163102   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:34.230868   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:34.230885   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:34.242117   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:34.242134   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:34.296197   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:34.289745    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.290228    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.291731    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.292260    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.293746    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:34.289745    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.290228    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.291731    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.292260    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.293746    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
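	The "connection refused" on localhost:8441 is consistent with the empty crictl results: nothing is listening on this profile's apiserver port, so kubectl fails before it can even negotiate TLS. Two quick checks would distinguish a missing listener from an auth or certificate problem (ss and curl are assumed to be present in the image, which may not hold for minimal bases):

	    # Is anything bound to the apiserver port for this profile?
	    sudo ss -ltnp | grep -w 8441 || echo "no listener on 8441"
	    # Probe the health endpoint directly; here it is expected to fail.
	    curl -ksS https://localhost:8441/healthz || true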
	I1003 18:15:34.296208   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:34.296218   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:34.353696   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:34.353715   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
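	Each diagnostic pass gathers the same five log sources seen above: kubelet and CRI-O via journalctl, kernel warnings via dmesg, a kubectl describe of the nodes, and a container listing with a docker fallback. The sweep can be replayed on the node with the exact commands from the log:

	    # The commands minikube runs for its log sweep, in this pass's order:
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a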
	I1003 18:15:36.882850   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:36.893827   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:36.893878   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:36.918928   38063 cri.go:89] found id: ""
	I1003 18:15:36.918945   38063 logs.go:282] 0 containers: []
	W1003 18:15:36.918954   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:36.918960   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:36.919024   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:36.943500   38063 cri.go:89] found id: ""
	I1003 18:15:36.943516   38063 logs.go:282] 0 containers: []
	W1003 18:15:36.943524   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:36.943529   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:36.943571   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:36.967892   38063 cri.go:89] found id: ""
	I1003 18:15:36.967909   38063 logs.go:282] 0 containers: []
	W1003 18:15:36.967917   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:36.967921   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:36.967961   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:36.992302   38063 cri.go:89] found id: ""
	I1003 18:15:36.992316   38063 logs.go:282] 0 containers: []
	W1003 18:15:36.992322   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:36.992326   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:36.992371   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:37.017414   38063 cri.go:89] found id: ""
	I1003 18:15:37.017429   38063 logs.go:282] 0 containers: []
	W1003 18:15:37.017435   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:37.017440   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:37.017483   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:37.042577   38063 cri.go:89] found id: ""
	I1003 18:15:37.042596   38063 logs.go:282] 0 containers: []
	W1003 18:15:37.042601   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:37.042606   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:37.042652   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:37.067424   38063 cri.go:89] found id: ""
	I1003 18:15:37.067438   38063 logs.go:282] 0 containers: []
	W1003 18:15:37.067444   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:37.067451   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:37.067466   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:37.133058   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:37.133076   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:37.144095   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:37.144109   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:37.201432   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:37.195051    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.195552    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.197089    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.197493    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.198600    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:37.195051    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.195552    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.197089    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.197493    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.198600    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:37.201453   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:37.201464   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:37.264020   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:37.264041   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:39.793917   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:39.804160   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:39.804201   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:39.828532   38063 cri.go:89] found id: ""
	I1003 18:15:39.828545   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.828551   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:39.828557   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:39.828595   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:39.854181   38063 cri.go:89] found id: ""
	I1003 18:15:39.854194   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.854199   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:39.854203   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:39.854241   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:39.878636   38063 cri.go:89] found id: ""
	I1003 18:15:39.878649   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.878655   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:39.878665   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:39.878714   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:39.903647   38063 cri.go:89] found id: ""
	I1003 18:15:39.903662   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.903672   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:39.903678   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:39.903727   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:39.928358   38063 cri.go:89] found id: ""
	I1003 18:15:39.928371   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.928377   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:39.928382   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:39.928425   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:39.952698   38063 cri.go:89] found id: ""
	I1003 18:15:39.952712   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.952718   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:39.952722   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:39.952770   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:39.977762   38063 cri.go:89] found id: ""
	I1003 18:15:39.977779   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.977788   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:39.977798   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:39.977810   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:40.047503   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:40.047521   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:40.058597   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:40.058612   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:40.113456   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:40.107101    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.107593    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.109120    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.109527    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.111020    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:40.107101    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.107593    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.109120    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.109527    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.111020    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:40.113474   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:40.113485   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:40.173884   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:40.173904   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
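	From here the report repeats the same cycle roughly every three seconds: the pgrep probe, the seven empty crictl queries, and the five-source log sweep (whose ordering rotates between iterations), with "describe nodes" failing on port 8441 each time. Given a saved copy of this log, the retry cadence can be read off the probe timestamps (the file name below is hypothetical):

	    # Print the timestamp of every apiserver probe in a saved log file.
	    grep 'Run: sudo pgrep -xnf kube-apiserver' minikube-functional.log |
	      awk '{print $2}'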
	I1003 18:15:42.702098   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:42.712135   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:42.712176   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:42.735423   38063 cri.go:89] found id: ""
	I1003 18:15:42.735438   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.735445   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:42.735450   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:42.735502   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:42.758834   38063 cri.go:89] found id: ""
	I1003 18:15:42.758847   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.758853   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:42.758857   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:42.758918   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:42.782548   38063 cri.go:89] found id: ""
	I1003 18:15:42.782564   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.782573   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:42.782578   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:42.782631   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:42.808289   38063 cri.go:89] found id: ""
	I1003 18:15:42.808307   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.808315   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:42.808321   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:42.808362   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:42.832106   38063 cri.go:89] found id: ""
	I1003 18:15:42.832120   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.832126   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:42.832136   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:42.832178   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:42.856681   38063 cri.go:89] found id: ""
	I1003 18:15:42.856697   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.856704   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:42.856708   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:42.856753   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:42.880778   38063 cri.go:89] found id: ""
	I1003 18:15:42.880793   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.880799   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:42.880806   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:42.880815   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:42.891568   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:42.891591   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:42.944856   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:42.938479    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.938960    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.940463    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.940834    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.942358    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:42.938479    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.938960    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.940463    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.940834    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.942358    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:42.944869   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:42.944883   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:43.008325   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:43.008342   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:43.034919   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:43.034934   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:45.601892   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:45.612293   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:45.612337   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:45.636800   38063 cri.go:89] found id: ""
	I1003 18:15:45.636816   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.636825   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:45.636831   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:45.636897   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:45.663419   38063 cri.go:89] found id: ""
	I1003 18:15:45.663431   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.663442   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:45.663446   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:45.663484   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:45.688326   38063 cri.go:89] found id: ""
	I1003 18:15:45.688340   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.688346   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:45.688350   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:45.688390   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:45.713903   38063 cri.go:89] found id: ""
	I1003 18:15:45.713916   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.713923   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:45.713929   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:45.713969   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:45.738540   38063 cri.go:89] found id: ""
	I1003 18:15:45.738554   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.738560   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:45.738565   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:45.738626   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:45.763029   38063 cri.go:89] found id: ""
	I1003 18:15:45.763042   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.763049   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:45.763054   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:45.763105   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:45.787593   38063 cri.go:89] found id: ""
	I1003 18:15:45.787605   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.787613   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:45.787619   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:45.787628   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:45.814410   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:45.814426   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:45.879690   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:45.879708   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:45.890632   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:45.890646   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:45.945900   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:45.939503    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.940097    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.941591    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.942022    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.943469    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:45.939503    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.940097    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.941591    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.942022    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.943469    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:45.945911   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:45.945920   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:48.510685   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:48.520989   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:48.521030   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:48.545850   38063 cri.go:89] found id: ""
	I1003 18:15:48.545863   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.545871   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:48.545875   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:48.545917   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:48.570678   38063 cri.go:89] found id: ""
	I1003 18:15:48.570691   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.570699   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:48.570704   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:48.570758   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:48.594906   38063 cri.go:89] found id: ""
	I1003 18:15:48.594922   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.594931   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:48.594936   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:48.595011   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:48.620934   38063 cri.go:89] found id: ""
	I1003 18:15:48.620951   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.620958   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:48.620963   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:48.621033   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:48.645916   38063 cri.go:89] found id: ""
	I1003 18:15:48.645933   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.645942   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:48.645947   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:48.646009   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:48.670919   38063 cri.go:89] found id: ""
	I1003 18:15:48.670932   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.670939   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:48.670944   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:48.671004   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:48.695257   38063 cri.go:89] found id: ""
	I1003 18:15:48.695274   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.695281   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:48.695289   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:48.695298   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:48.723183   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:48.723198   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:48.790906   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:48.790924   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:48.802517   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:48.802531   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:48.858274   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:48.851795    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.852286    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.853794    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.854187    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.855729    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:48.851795    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.852286    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.853794    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.854187    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.855729    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:48.858294   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:48.858309   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:51.418365   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:51.428790   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:51.428851   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:51.453214   38063 cri.go:89] found id: ""
	I1003 18:15:51.453228   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.453235   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:51.453241   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:51.453302   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:51.478216   38063 cri.go:89] found id: ""
	I1003 18:15:51.478231   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.478241   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:51.478247   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:51.478298   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:51.503301   38063 cri.go:89] found id: ""
	I1003 18:15:51.503316   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.503322   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:51.503327   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:51.503368   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:51.528130   38063 cri.go:89] found id: ""
	I1003 18:15:51.528146   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.528152   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:51.528157   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:51.528196   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:51.553046   38063 cri.go:89] found id: ""
	I1003 18:15:51.553076   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.553084   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:51.553091   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:51.553133   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:51.577406   38063 cri.go:89] found id: ""
	I1003 18:15:51.577420   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.577426   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:51.577432   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:51.577471   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:51.602068   38063 cri.go:89] found id: ""
	I1003 18:15:51.602084   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.602092   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:51.602102   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:51.602114   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:51.629035   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:51.629051   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:51.697997   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:51.698016   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:51.710748   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:51.710769   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:51.764330   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:51.757745    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.758298    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.759850    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.760310    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.761740    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:51.757745    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.758298    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.759850    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.760310    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.761740    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:51.764338   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:51.764348   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:54.323078   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:54.333510   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:54.333559   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:54.357777   38063 cri.go:89] found id: ""
	I1003 18:15:54.357790   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.357796   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:54.357800   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:54.357841   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:54.381421   38063 cri.go:89] found id: ""
	I1003 18:15:54.381435   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.381442   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:54.381447   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:54.381495   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:54.404951   38063 cri.go:89] found id: ""
	I1003 18:15:54.404969   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.404991   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:54.404999   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:54.405045   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:54.429154   38063 cri.go:89] found id: ""
	I1003 18:15:54.429172   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.429181   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:54.429186   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:54.429224   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:54.452874   38063 cri.go:89] found id: ""
	I1003 18:15:54.452895   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.452903   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:54.452907   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:54.452946   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:54.477916   38063 cri.go:89] found id: ""
	I1003 18:15:54.477929   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.477937   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:54.477942   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:54.478001   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:54.503676   38063 cri.go:89] found id: ""
	I1003 18:15:54.503692   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.503699   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:54.503706   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:54.503716   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:54.571451   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:54.571469   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:54.582598   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:54.582614   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:54.635288   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:54.629106    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.629524    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.631026    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.631408    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.632845    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:54.629106    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.629524    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.631026    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.631408    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.632845    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:54.635301   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:54.635338   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:54.693328   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:54.693348   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:57.224616   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:57.234873   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:57.234916   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:57.259150   38063 cri.go:89] found id: ""
	I1003 18:15:57.259164   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.259170   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:57.259175   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:57.259224   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:57.282636   38063 cri.go:89] found id: ""
	I1003 18:15:57.282650   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.282662   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:57.282667   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:57.282716   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:57.307774   38063 cri.go:89] found id: ""
	I1003 18:15:57.307792   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.307800   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:57.307806   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:57.307846   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:57.331087   38063 cri.go:89] found id: ""
	I1003 18:15:57.331101   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.331107   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:57.331112   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:57.331153   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:57.356108   38063 cri.go:89] found id: ""
	I1003 18:15:57.356125   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.356200   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:57.356209   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:57.356267   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:57.381138   38063 cri.go:89] found id: ""
	I1003 18:15:57.381154   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.381161   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:57.381166   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:57.381206   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:57.405322   38063 cri.go:89] found id: ""
	I1003 18:15:57.405339   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.405345   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:57.405353   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:57.405362   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:57.463330   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:57.463345   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:57.491754   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:57.491771   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:57.557710   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:57.557727   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:57.569135   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:57.569150   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:57.622275   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:57.615880    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.616369    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.617874    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.618325    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.619768    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:00.123157   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:00.133350   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:00.133393   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:00.157946   38063 cri.go:89] found id: ""
	I1003 18:16:00.157958   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.157965   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:00.157970   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:00.158035   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:00.182943   38063 cri.go:89] found id: ""
	I1003 18:16:00.182956   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.182962   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:00.182967   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:00.183026   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:00.206834   38063 cri.go:89] found id: ""
	I1003 18:16:00.206848   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.206854   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:00.206858   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:00.206901   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:00.231944   38063 cri.go:89] found id: ""
	I1003 18:16:00.231959   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.231965   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:00.231970   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:00.232027   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:00.257587   38063 cri.go:89] found id: ""
	I1003 18:16:00.257607   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.257613   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:00.257619   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:00.257662   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:00.281667   38063 cri.go:89] found id: ""
	I1003 18:16:00.281683   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.281690   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:00.281694   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:00.281735   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:00.306161   38063 cri.go:89] found id: ""
	I1003 18:16:00.306173   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.306183   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:00.306189   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:00.306199   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:00.334078   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:00.334094   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:00.398782   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:00.398800   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:00.410100   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:00.410118   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:00.464563   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:00.458004    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.458485    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.459956    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.460373    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.461844    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:00.464573   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:00.464584   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:03.025201   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:03.035449   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:03.035489   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:03.060615   38063 cri.go:89] found id: ""
	I1003 18:16:03.060629   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.060638   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:03.060644   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:03.060695   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:03.085028   38063 cri.go:89] found id: ""
	I1003 18:16:03.085041   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.085047   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:03.085052   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:03.085101   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:03.109281   38063 cri.go:89] found id: ""
	I1003 18:16:03.109295   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.109301   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:03.109306   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:03.109343   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:03.133199   38063 cri.go:89] found id: ""
	I1003 18:16:03.133212   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.133218   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:03.133223   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:03.133271   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:03.157142   38063 cri.go:89] found id: ""
	I1003 18:16:03.157158   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.157167   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:03.157174   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:03.157215   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:03.181156   38063 cri.go:89] found id: ""
	I1003 18:16:03.181170   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.181177   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:03.181182   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:03.181225   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:03.207371   38063 cri.go:89] found id: ""
	I1003 18:16:03.207385   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.207392   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:03.207399   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:03.207407   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:03.268072   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:03.268093   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:03.295655   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:03.295675   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:03.359095   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:03.359116   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:03.370093   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:03.370110   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:03.423681   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:03.416458    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:03.416947    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:03.419089    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:03.419495    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:03.421012    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:05.925327   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:05.935882   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:05.935927   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:05.960833   38063 cri.go:89] found id: ""
	I1003 18:16:05.960850   38063 logs.go:282] 0 containers: []
	W1003 18:16:05.960858   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:05.960864   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:05.960918   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:05.985562   38063 cri.go:89] found id: ""
	I1003 18:16:05.985577   38063 logs.go:282] 0 containers: []
	W1003 18:16:05.985585   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:05.985592   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:05.985644   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:06.008796   38063 cri.go:89] found id: ""
	I1003 18:16:06.008813   38063 logs.go:282] 0 containers: []
	W1003 18:16:06.008822   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:06.008827   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:06.008865   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:06.034023   38063 cri.go:89] found id: ""
	I1003 18:16:06.034037   38063 logs.go:282] 0 containers: []
	W1003 18:16:06.034043   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:06.034048   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:06.034099   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:06.057314   38063 cri.go:89] found id: ""
	I1003 18:16:06.057330   38063 logs.go:282] 0 containers: []
	W1003 18:16:06.057340   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:06.057347   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:06.057396   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:06.082843   38063 cri.go:89] found id: ""
	I1003 18:16:06.082859   38063 logs.go:282] 0 containers: []
	W1003 18:16:06.082865   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:06.082870   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:06.082921   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:06.106237   38063 cri.go:89] found id: ""
	I1003 18:16:06.106251   38063 logs.go:282] 0 containers: []
	W1003 18:16:06.106257   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:06.106264   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:06.106276   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:06.175390   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:06.175407   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:06.186550   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:06.186565   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:06.239490   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:06.233165    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:06.233624    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:06.235128    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:06.235537    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:06.237048    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:06.239500   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:06.239513   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:06.301454   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:06.301474   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:08.830757   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:08.841156   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:08.841199   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:08.865562   38063 cri.go:89] found id: ""
	I1003 18:16:08.865578   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.865584   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:08.865589   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:08.865636   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:08.889510   38063 cri.go:89] found id: ""
	I1003 18:16:08.889527   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.889536   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:08.889543   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:08.889588   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:08.914125   38063 cri.go:89] found id: ""
	I1003 18:16:08.914140   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.914146   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:08.914150   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:08.914195   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:08.937681   38063 cri.go:89] found id: ""
	I1003 18:16:08.937697   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.937706   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:08.937711   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:08.937752   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:08.961970   38063 cri.go:89] found id: ""
	I1003 18:16:08.961998   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.962006   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:08.962012   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:08.962073   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:08.986853   38063 cri.go:89] found id: ""
	I1003 18:16:08.986870   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.986877   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:08.986883   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:08.986953   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:09.012531   38063 cri.go:89] found id: ""
	I1003 18:16:09.012547   38063 logs.go:282] 0 containers: []
	W1003 18:16:09.012555   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:09.012570   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:09.012581   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:09.078036   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:09.078053   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:09.088904   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:09.088918   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:09.143252   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:09.136367    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:09.136907    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:09.138514    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:09.139001    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:09.140648    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:09.143263   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:09.143275   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:09.201869   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:09.201887   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:11.730105   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:11.740344   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:11.740384   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:11.765234   38063 cri.go:89] found id: ""
	I1003 18:16:11.765247   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.765256   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:11.765261   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:11.765318   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:11.789130   38063 cri.go:89] found id: ""
	I1003 18:16:11.789143   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.789149   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:11.789154   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:11.789198   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:11.815036   38063 cri.go:89] found id: ""
	I1003 18:16:11.815050   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.815058   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:11.815064   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:11.815113   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:11.839467   38063 cri.go:89] found id: ""
	I1003 18:16:11.839483   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.839490   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:11.839495   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:11.839539   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:11.863864   38063 cri.go:89] found id: ""
	I1003 18:16:11.863893   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.863899   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:11.863904   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:11.863955   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:11.889464   38063 cri.go:89] found id: ""
	I1003 18:16:11.889480   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.889488   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:11.889495   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:11.889535   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:11.912845   38063 cri.go:89] found id: ""
	I1003 18:16:11.912862   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.912870   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:11.912880   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:11.912904   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:11.966773   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:11.959444    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:11.960161    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:11.961014    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:11.962530    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:11.962898    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:11.966785   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:11.966795   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:12.025128   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:12.025146   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:12.053945   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:12.053960   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:12.119420   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:12.119438   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:14.631092   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:14.641283   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:14.641330   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:14.665808   38063 cri.go:89] found id: ""
	I1003 18:16:14.665821   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.665827   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:14.665832   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:14.665874   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:14.690191   38063 cri.go:89] found id: ""
	I1003 18:16:14.690204   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.690211   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:14.690216   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:14.690266   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:14.715586   38063 cri.go:89] found id: ""
	I1003 18:16:14.715598   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.715619   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:14.715623   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:14.715677   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:14.740173   38063 cri.go:89] found id: ""
	I1003 18:16:14.740190   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.740198   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:14.740202   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:14.740247   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:14.764574   38063 cri.go:89] found id: ""
	I1003 18:16:14.764589   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.764595   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:14.764599   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:14.764653   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:14.788993   38063 cri.go:89] found id: ""
	I1003 18:16:14.789007   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.789014   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:14.789018   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:14.789059   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:14.813679   38063 cri.go:89] found id: ""
	I1003 18:16:14.813692   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.813699   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:14.813706   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:14.813715   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:14.840363   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:14.840378   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:14.906264   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:14.906280   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:14.917237   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:14.917251   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:14.971230   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:14.964471    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:14.965000    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:14.966522    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:14.966918    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:14.968491    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:14.971246   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:14.971257   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:17.534133   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:17.544453   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:17.544502   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:17.568816   38063 cri.go:89] found id: ""
	I1003 18:16:17.568834   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.568841   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:17.568847   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:17.568899   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:17.593442   38063 cri.go:89] found id: ""
	I1003 18:16:17.593460   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.593466   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:17.593472   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:17.593515   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:17.617737   38063 cri.go:89] found id: ""
	I1003 18:16:17.617754   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.617761   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:17.617766   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:17.617804   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:17.642180   38063 cri.go:89] found id: ""
	I1003 18:16:17.642194   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.642201   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:17.642206   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:17.642250   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:17.666189   38063 cri.go:89] found id: ""
	I1003 18:16:17.666204   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.666210   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:17.666214   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:17.666259   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:17.689273   38063 cri.go:89] found id: ""
	I1003 18:16:17.689289   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.689297   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:17.689305   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:17.689345   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:17.714353   38063 cri.go:89] found id: ""
	I1003 18:16:17.714373   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.714381   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:17.714394   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:17.714407   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:17.768746   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:17.762135    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:17.762597    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:17.764136    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:17.764533    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:17.766023    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:17.768759   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:17.768768   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:17.830139   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:17.830159   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:17.858326   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:17.858342   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:17.922889   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:17.922911   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:20.435863   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:20.446321   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:20.446361   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:20.471731   38063 cri.go:89] found id: ""
	I1003 18:16:20.471743   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.471749   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:20.471753   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:20.471792   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:20.495730   38063 cri.go:89] found id: ""
	I1003 18:16:20.495747   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.495755   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:20.495760   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:20.495815   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:20.520555   38063 cri.go:89] found id: ""
	I1003 18:16:20.520572   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.520581   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:20.520597   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:20.520650   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:20.545197   38063 cri.go:89] found id: ""
	I1003 18:16:20.545210   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.545216   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:20.545220   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:20.545258   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:20.569113   38063 cri.go:89] found id: ""
	I1003 18:16:20.569126   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.569132   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:20.569138   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:20.569189   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:20.593468   38063 cri.go:89] found id: ""
	I1003 18:16:20.593483   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.593491   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:20.593496   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:20.593545   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:20.617852   38063 cri.go:89] found id: ""
	I1003 18:16:20.617865   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.617872   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:20.617878   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:20.617887   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:20.680360   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:20.680379   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:20.691258   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:20.691271   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:20.745174   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:20.738655    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.739179    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.740672    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.741122    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.742610    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:20.738655    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.739179    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.740672    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.741122    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.742610    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:20.745187   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:20.745197   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:20.806835   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:20.806853   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
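The block above is one complete diagnostic pass: the start logic probes for each control-plane component by listing CRI containers (sudo crictl ps -a --quiet --name=<component>), finds none, and falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal Go sketch of that container probe, assuming crictl on PATH and passwordless sudo (an illustration of the commands shown in the log, not minikube's actual source):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs mirrors "sudo crictl ps -a --quiet --name=<component>":
    // with --quiet, crictl prints one container ID per line, or nothing at all
    // when no container matches, which is what produces `found id: ""` above.
    func listContainerIDs(component string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet"} {
    		ids, err := listContainerIDs(c)
    		if err != nil {
    			fmt.Printf("probe %s: %v\n", c, err)
    			continue
    		}
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", c)
    		} else {
    			fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
    		}
    	}
    }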
	I1003 18:16:23.335788   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:23.346440   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:23.346505   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:23.371250   38063 cri.go:89] found id: ""
	I1003 18:16:23.371263   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.371269   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:23.371273   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:23.371315   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:23.396570   38063 cri.go:89] found id: ""
	I1003 18:16:23.396585   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.396592   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:23.396596   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:23.396646   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:23.420703   38063 cri.go:89] found id: ""
	I1003 18:16:23.420718   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.420728   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:23.420735   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:23.420783   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:23.445294   38063 cri.go:89] found id: ""
	I1003 18:16:23.445310   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.445319   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:23.445326   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:23.445372   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:23.470082   38063 cri.go:89] found id: ""
	I1003 18:16:23.470100   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.470106   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:23.470110   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:23.470148   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:23.494417   38063 cri.go:89] found id: ""
	I1003 18:16:23.494432   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.494441   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:23.494446   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:23.494489   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:23.519492   38063 cri.go:89] found id: ""
	I1003 18:16:23.519507   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.519516   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:23.519526   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:23.519538   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:23.583328   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:23.583346   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:23.594696   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:23.594710   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:23.649094   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:23.642344    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.642882    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.644368    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.644805    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.646275    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:23.642344    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.642882    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.644368    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.644805    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.646275    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:23.649104   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:23.649113   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:23.710665   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:23.710684   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
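The repeated describe-nodes failure is the same symptom each pass: kubectl dials the profile's apiserver endpoint, localhost:8441, and is refused because no kube-apiserver container ever started. A quick way to reproduce just that connectivity check, as a hedged sketch with the port taken from the log above:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Nothing is listening on 8441 while kube-apiserver is down, so this
    	// fails with the same "connect: connection refused" seen in stderr.
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is accepting connections")
    }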
	I1003 18:16:26.239439   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:26.250313   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:26.250355   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:26.275460   38063 cri.go:89] found id: ""
	I1003 18:16:26.275476   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.275484   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:26.275490   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:26.275544   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:26.300685   38063 cri.go:89] found id: ""
	I1003 18:16:26.300701   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.300710   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:26.300716   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:26.300760   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:26.324124   38063 cri.go:89] found id: ""
	I1003 18:16:26.324141   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.324150   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:26.324156   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:26.324203   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:26.349331   38063 cri.go:89] found id: ""
	I1003 18:16:26.349348   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.349357   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:26.349363   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:26.349407   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:26.373924   38063 cri.go:89] found id: ""
	I1003 18:16:26.373938   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.373944   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:26.373948   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:26.374020   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:26.398561   38063 cri.go:89] found id: ""
	I1003 18:16:26.398575   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.398581   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:26.398593   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:26.398637   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:26.423043   38063 cri.go:89] found id: ""
	I1003 18:16:26.423055   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.423064   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:26.423073   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:26.423085   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:26.448940   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:26.448957   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:26.514345   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:26.514362   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:26.525206   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:26.525218   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:26.579573   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:26.572848    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.573316    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.574821    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.575280    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.576738    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:26.572848    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.573316    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.574821    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.575280    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.576738    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:26.579590   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:26.579599   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:29.139399   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:29.149491   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:29.149546   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:29.174745   38063 cri.go:89] found id: ""
	I1003 18:16:29.174759   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.174764   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:29.174769   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:29.174809   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:29.199728   38063 cri.go:89] found id: ""
	I1003 18:16:29.199741   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.199747   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:29.199752   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:29.199803   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:29.225114   38063 cri.go:89] found id: ""
	I1003 18:16:29.225130   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.225139   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:29.225145   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:29.225208   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:29.249942   38063 cri.go:89] found id: ""
	I1003 18:16:29.249959   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.249968   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:29.249990   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:29.250054   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:29.274658   38063 cri.go:89] found id: ""
	I1003 18:16:29.274676   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.274684   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:29.274690   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:29.274740   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:29.299132   38063 cri.go:89] found id: ""
	I1003 18:16:29.299147   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.299153   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:29.299159   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:29.299207   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:29.323399   38063 cri.go:89] found id: ""
	I1003 18:16:29.323414   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.323420   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:29.323427   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:29.323436   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:29.388896   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:29.388919   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:29.400252   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:29.400267   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:29.453553   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:29.447303    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.447746    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.449289    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.449640    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.451133    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:29.447303    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.447746    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.449289    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.449640    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.451133    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:29.453604   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:29.453615   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:29.515234   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:29.515257   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
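Each pass ends by collecting the same fallback log sources. A self-contained sketch that re-runs those shell commands (copied verbatim from the log) and prints their output, for reproducing the collection step outside minikube; this is an illustration under those assumptions, not the tool's own collector:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather runs one collection command through bash, exactly as the
    // ssh_runner lines above do, and prints whatever it produced.
    func gather(name, cmd string) {
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	fmt.Printf("== %s ==\n%s", name, out)
    	if err != nil {
    		fmt.Printf("(%s exited with %v)\n", name, err)
    	}
    }

    func main() {
    	gather("kubelet", "sudo journalctl -u kubelet -n 400")
    	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    	gather("CRI-O", "sudo journalctl -u crio -n 400")
    	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }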
	I1003 18:16:32.045106   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:32.055516   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:32.055563   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:32.081412   38063 cri.go:89] found id: ""
	I1003 18:16:32.081425   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.081431   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:32.081436   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:32.081476   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:32.106569   38063 cri.go:89] found id: ""
	I1003 18:16:32.106585   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.106591   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:32.106595   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:32.106634   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:32.131668   38063 cri.go:89] found id: ""
	I1003 18:16:32.131684   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.131692   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:32.131699   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:32.131745   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:32.156465   38063 cri.go:89] found id: ""
	I1003 18:16:32.156479   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.156485   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:32.156490   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:32.156566   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:32.181247   38063 cri.go:89] found id: ""
	I1003 18:16:32.181260   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.181267   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:32.181271   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:32.181314   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:32.205219   38063 cri.go:89] found id: ""
	I1003 18:16:32.205236   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.205245   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:32.205252   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:32.205305   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:32.229751   38063 cri.go:89] found id: ""
	I1003 18:16:32.229767   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.229776   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:32.229785   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:32.229797   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:32.257251   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:32.257266   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:32.325308   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:32.325326   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:32.336569   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:32.336584   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:32.391680   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:32.384542    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.385163    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.386741    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.387204    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.388820    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:32.384542    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.385163    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.386741    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.387204    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.388820    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:32.391693   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:32.391706   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:34.954303   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:34.965018   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:34.965070   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:34.990955   38063 cri.go:89] found id: ""
	I1003 18:16:34.990970   38063 logs.go:282] 0 containers: []
	W1003 18:16:34.990992   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:34.990999   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:34.991061   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:35.015676   38063 cri.go:89] found id: ""
	I1003 18:16:35.015689   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.015695   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:35.015699   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:35.015737   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:35.040155   38063 cri.go:89] found id: ""
	I1003 18:16:35.040168   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.040174   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:35.040179   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:35.040218   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:35.065569   38063 cri.go:89] found id: ""
	I1003 18:16:35.065587   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.065596   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:35.065602   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:35.065663   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:35.090276   38063 cri.go:89] found id: ""
	I1003 18:16:35.090288   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.090295   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:35.090299   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:35.090339   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:35.114581   38063 cri.go:89] found id: ""
	I1003 18:16:35.114617   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.114627   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:35.114633   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:35.114688   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:35.139719   38063 cri.go:89] found id: ""
	I1003 18:16:35.139734   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.139744   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:35.139753   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:35.139766   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:35.205015   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:35.205034   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:35.216021   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:35.216039   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:35.269655   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:35.262830    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.263341    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.264897    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.265346    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.266885    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:35.262830    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.263341    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.264897    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.265346    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.266885    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:35.269664   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:35.269674   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:35.330604   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:35.330634   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:37.861503   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:37.871534   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:37.871641   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:37.895946   38063 cri.go:89] found id: ""
	I1003 18:16:37.895961   38063 logs.go:282] 0 containers: []
	W1003 18:16:37.895971   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:37.895995   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:37.896048   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:37.921286   38063 cri.go:89] found id: ""
	I1003 18:16:37.921301   38063 logs.go:282] 0 containers: []
	W1003 18:16:37.921308   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:37.921314   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:37.921364   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:37.946115   38063 cri.go:89] found id: ""
	I1003 18:16:37.946131   38063 logs.go:282] 0 containers: []
	W1003 18:16:37.946141   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:37.946148   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:37.946194   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:37.970857   38063 cri.go:89] found id: ""
	I1003 18:16:37.970871   38063 logs.go:282] 0 containers: []
	W1003 18:16:37.970878   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:37.970882   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:37.970930   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:37.997387   38063 cri.go:89] found id: ""
	I1003 18:16:37.997405   38063 logs.go:282] 0 containers: []
	W1003 18:16:37.997412   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:37.997416   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:37.997459   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:38.022848   38063 cri.go:89] found id: ""
	I1003 18:16:38.022862   38063 logs.go:282] 0 containers: []
	W1003 18:16:38.022869   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:38.022874   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:38.022938   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:38.048588   38063 cri.go:89] found id: ""
	I1003 18:16:38.048624   38063 logs.go:282] 0 containers: []
	W1003 18:16:38.048632   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:38.048640   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:38.048653   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:38.110031   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:38.110050   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:38.137498   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:38.137513   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:38.203958   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:38.203994   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:38.215727   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:38.215744   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:38.269765   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:38.263066    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.263531    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.265220    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.265597    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.267129    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:38.263066    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.263531    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.265220    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.265597    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.267129    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:40.770413   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:40.780831   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:40.780874   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:40.804826   38063 cri.go:89] found id: ""
	I1003 18:16:40.804839   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.804845   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:40.804850   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:40.804890   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:40.830833   38063 cri.go:89] found id: ""
	I1003 18:16:40.830850   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.830858   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:40.830864   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:40.830930   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:40.856650   38063 cri.go:89] found id: ""
	I1003 18:16:40.856669   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.856677   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:40.856693   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:40.856748   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:40.881236   38063 cri.go:89] found id: ""
	I1003 18:16:40.881250   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.881256   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:40.881261   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:40.881301   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:40.905820   38063 cri.go:89] found id: ""
	I1003 18:16:40.905836   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.905843   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:40.905849   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:40.905900   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:40.931504   38063 cri.go:89] found id: ""
	I1003 18:16:40.931520   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.931527   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:40.931532   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:40.931583   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:40.957539   38063 cri.go:89] found id: ""
	I1003 18:16:40.957553   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.957560   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:40.957567   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:40.957578   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:41.015948   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:41.015969   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:41.044701   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:41.044726   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:41.112388   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:41.112406   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:41.123384   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:41.123399   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:41.177789   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:41.171080    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.171701    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.173280    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.173749    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.175246    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:41.171080    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.171701    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.173280    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.173749    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.175246    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
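The timestamps (18:16:20, :23, :26, :29, ...) show this probe repeating on a roughly three-second cadence until a deadline expires. A sketch of such a wait loop, with apiserverRunning standing in (hypothetically) for the pgrep probe recorded in the log; the deadline value is illustrative, not taken from minikube:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverRunning mirrors "sudo pgrep -xnf kube-apiserver.*minikube.*":
    // pgrep exits non-zero when no process matches, so Run returns an error.
    func apiserverRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	deadline := time.Now().Add(5 * time.Minute)
    	for time.Now().Before(deadline) {
    		if apiserverRunning() {
    			fmt.Println("kube-apiserver is up")
    			return
    		}
    		time.Sleep(3 * time.Second) // matches the cadence in the log
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }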
	I1003 18:16:43.679496   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:43.689800   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:43.689843   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:43.714130   38063 cri.go:89] found id: ""
	I1003 18:16:43.714145   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.714152   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:43.714156   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:43.714197   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:43.738900   38063 cri.go:89] found id: ""
	I1003 18:16:43.738916   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.738924   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:43.738929   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:43.738972   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:43.763822   38063 cri.go:89] found id: ""
	I1003 18:16:43.763835   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.763841   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:43.763845   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:43.763884   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:43.789103   38063 cri.go:89] found id: ""
	I1003 18:16:43.789120   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.789128   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:43.789134   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:43.789187   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:43.813436   38063 cri.go:89] found id: ""
	I1003 18:16:43.813447   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.813455   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:43.813460   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:43.813513   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:43.838306   38063 cri.go:89] found id: ""
	I1003 18:16:43.838322   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.838331   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:43.838338   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:43.838382   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:43.863413   38063 cri.go:89] found id: ""
	I1003 18:16:43.863429   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.863435   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:43.863442   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:43.863451   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:43.931299   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:43.931317   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:43.942307   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:43.942321   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:43.997476   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:43.990626    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.991191    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.992711    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.993154    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.994633    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:43.997488   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:43.997500   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:44.053446   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:44.053464   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
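
The probe pass above never finds a control-plane container: every crictl query comes back with an empty ID list. As a rough illustration of what that probe does, here is a minimal, self-contained Go sketch that shells out to the same crictl command; the helper name listContainers is ours, not minikube's cri.go, and it assumes crictl and passwordless sudo are available on the machine it runs on.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers mirrors: sudo crictl ps -a --quiet --name=<name>
    // It returns the IDs of all containers (any state) whose name matches.
    func listContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, fmt.Errorf("crictl ps failed: %w", err)
    	}
    	// crictl --quiet prints one container ID per line.
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
    		ids, err := listContainers(name)
    		if err != nil {
    			fmt.Println("error:", err)
    			continue
    		}
    		// An empty slice corresponds to the `found id: ""` lines in the log.
    		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
    	}
    }
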
	I1003 18:16:46.583423   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:46.593663   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:46.593719   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:46.618188   38063 cri.go:89] found id: ""
	I1003 18:16:46.618202   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.618208   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:46.618213   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:46.618250   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:46.642929   38063 cri.go:89] found id: ""
	I1003 18:16:46.642943   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.642949   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:46.642954   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:46.643015   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:46.667745   38063 cri.go:89] found id: ""
	I1003 18:16:46.667761   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.667770   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:46.667775   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:46.667818   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:46.692080   38063 cri.go:89] found id: ""
	I1003 18:16:46.692092   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.692098   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:46.692102   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:46.692140   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:46.716789   38063 cri.go:89] found id: ""
	I1003 18:16:46.716807   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.716816   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:46.716822   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:46.716867   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:46.741361   38063 cri.go:89] found id: ""
	I1003 18:16:46.741375   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.741382   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:46.741389   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:46.741437   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:46.765330   38063 cri.go:89] found id: ""
	I1003 18:16:46.765343   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.765349   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:46.765357   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:46.765368   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:46.830366   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:46.830385   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:46.841266   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:46.841279   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:46.894396   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:46.888072    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.888542    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.890079    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.890459    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.891950    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:46.894415   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:46.894426   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:46.954277   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:46.954295   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
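
The "Gathering logs for ..." steps are plain shell pipelines run on the node: journalctl for the kubelet and crio units, and dmesg filtered to warning level and above. A sketch of the same gathering, under the assumption that it runs directly on the node rather than through minikube's ssh_runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather runs one of the log-collection pipelines from the log above
    // and prints whatever it produced, including any error.
    func gather(label, cmd string) {
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	fmt.Printf("== %s (err=%v) ==\n%s\n", label, err, out)
    }

    func main() {
    	gather("kubelet", `sudo journalctl -u kubelet -n 400`)
    	gather("CRI-O", `sudo journalctl -u crio -n 400`)
    	gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
    }
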
	I1003 18:16:49.482413   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:49.492881   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:49.492921   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:49.516075   38063 cri.go:89] found id: ""
	I1003 18:16:49.516093   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.516102   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:49.516108   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:49.516154   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:49.542911   38063 cri.go:89] found id: ""
	I1003 18:16:49.542928   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.542936   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:49.542940   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:49.543006   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:49.568965   38063 cri.go:89] found id: ""
	I1003 18:16:49.568996   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.569005   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:49.569009   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:49.569055   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:49.593221   38063 cri.go:89] found id: ""
	I1003 18:16:49.593238   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.593246   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:49.593251   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:49.593302   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:49.618807   38063 cri.go:89] found id: ""
	I1003 18:16:49.618824   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.618831   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:49.618848   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:49.618893   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:49.642342   38063 cri.go:89] found id: ""
	I1003 18:16:49.642357   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.642363   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:49.642368   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:49.642407   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:49.666474   38063 cri.go:89] found id: ""
	I1003 18:16:49.666488   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.666494   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:49.666502   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:49.666513   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:49.722457   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:49.722476   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:49.750153   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:49.750170   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:49.814369   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:49.814387   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:49.825405   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:49.825418   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:49.879924   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:49.873380    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.873871    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.875556    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.876003    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.877459    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
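
The repeated "dial tcp [::1]:8441: connect: connection refused" means nothing is listening on the apiserver port at all, which is a different failure mode from a TLS or authorization error (those happen after the TCP connection succeeds). A bare TCP dial is enough to tell the cases apart; the sketch below uses the port from the log, and the helper name is hypothetical.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // apiserverListening reports whether anything accepts TCP connections
    // on the given address. A "connection refused" error here matches the
    // state captured in the log: the apiserver process never came up.
    func apiserverListening(addr string) bool {
    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    	if err != nil {
    		return false
    	}
    	conn.Close()
    	return true
    }

    func main() {
    	fmt.Println("apiserver port open:", apiserverListening("localhost:8441"))
    }
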
	I1003 18:16:52.380662   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:52.391022   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:52.391066   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:52.414399   38063 cri.go:89] found id: ""
	I1003 18:16:52.414416   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.414423   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:52.414428   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:52.414466   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:52.438285   38063 cri.go:89] found id: ""
	I1003 18:16:52.438301   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.438308   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:52.438312   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:52.438352   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:52.463204   38063 cri.go:89] found id: ""
	I1003 18:16:52.463218   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.463224   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:52.463229   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:52.463271   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:52.487579   38063 cri.go:89] found id: ""
	I1003 18:16:52.487593   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.487598   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:52.487605   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:52.487658   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:52.512643   38063 cri.go:89] found id: ""
	I1003 18:16:52.512657   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.512663   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:52.512667   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:52.512705   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:52.538897   38063 cri.go:89] found id: ""
	I1003 18:16:52.538913   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.538920   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:52.538926   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:52.538970   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:52.563277   38063 cri.go:89] found id: ""
	I1003 18:16:52.563294   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.563302   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:52.563310   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:52.563321   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:52.622624   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:52.622642   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:52.650058   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:52.650074   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:52.714242   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:52.714261   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:52.725305   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:52.725319   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:52.777801   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:52.771320   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.772111   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.773166   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.773579   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.775090   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
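
Each pass opens with sudo pgrep -xnf kube-apiserver.*minikube.*: -f matches against the full command line, -x requires the pattern to match that whole line, and -n keeps only the newest matching process. pgrep exits with status 1 when nothing matches, which is why the loop falls through to the crictl probes on every iteration. A small hypothetical wrapper:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // newestPID returns the PID of the newest process whose full command
    // line matches the pattern, or ok=false if pgrep found nothing
    // (pgrep signals "no match" with exit status 1).
    func newestPID(pattern string) (pid string, ok bool) {
    	out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output()
    	if err != nil {
    		return "", false
    	}
    	return strings.TrimSpace(string(out)), true
    }

    func main() {
    	if pid, ok := newestPID("kube-apiserver.*minikube.*"); ok {
    		fmt.Println("apiserver pid:", pid)
    	} else {
    		fmt.Println("no kube-apiserver process yet")
    	}
    }
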
	I1003 18:16:55.279440   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:55.290117   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:55.290161   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:55.315904   38063 cri.go:89] found id: ""
	I1003 18:16:55.315920   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.315926   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:55.315930   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:55.315996   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:55.340568   38063 cri.go:89] found id: ""
	I1003 18:16:55.340582   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.340588   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:55.340593   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:55.340631   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:55.365911   38063 cri.go:89] found id: ""
	I1003 18:16:55.365927   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.365937   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:55.365943   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:55.366003   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:55.390838   38063 cri.go:89] found id: ""
	I1003 18:16:55.390855   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.390864   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:55.390870   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:55.390924   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:55.414625   38063 cri.go:89] found id: ""
	I1003 18:16:55.414638   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.414651   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:55.414657   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:55.414712   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:55.438460   38063 cri.go:89] found id: ""
	I1003 18:16:55.438474   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.438480   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:55.438484   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:55.438522   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:55.463131   38063 cri.go:89] found id: ""
	I1003 18:16:55.463148   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.463156   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:55.463165   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:55.463176   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:55.516949   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:55.510276   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.510824   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.512379   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.512767   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.514262   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:55.516958   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:55.516968   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:55.573992   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:55.574010   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:55.601928   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:55.601944   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:55.667452   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:55.667470   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
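
The describe-nodes gather invokes the version-pinned kubectl binary stored under /var/lib/minikube/binaries against the node-local kubeconfig. Reproducing that invocation outside the node is an assumption (both paths must exist where this runs), but the shape of the call is exactly what the log shows:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same invocation as the log, paths copied verbatim.
    	cmd := exec.Command("/bin/bash", "-c",
    		`sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		// With the apiserver down this reproduces the "connection refused"
    		// stderr captured above; the gatherer logs it and moves on.
    		fmt.Printf("describe nodes failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Printf("%s", out)
    }
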
	I1003 18:16:58.180268   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:58.190896   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:58.190942   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:58.215802   38063 cri.go:89] found id: ""
	I1003 18:16:58.215820   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.215828   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:58.215835   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:58.215885   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:58.240607   38063 cri.go:89] found id: ""
	I1003 18:16:58.240623   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.240632   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:58.240638   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:58.240719   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:58.264676   38063 cri.go:89] found id: ""
	I1003 18:16:58.264689   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.264696   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:58.264703   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:58.264742   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:58.289482   38063 cri.go:89] found id: ""
	I1003 18:16:58.289496   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.289502   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:58.289507   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:58.289558   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:58.314683   38063 cri.go:89] found id: ""
	I1003 18:16:58.314699   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.314708   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:58.314714   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:58.314763   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:58.340874   38063 cri.go:89] found id: ""
	I1003 18:16:58.340900   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.340910   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:58.340918   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:58.340989   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:58.365744   38063 cri.go:89] found id: ""
	I1003 18:16:58.365765   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.365774   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:58.365785   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:58.365798   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:58.424919   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:58.424938   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:58.452107   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:58.452122   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:58.516078   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:58.516098   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:58.527186   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:58.527200   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:58.581397   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:58.574853   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.575363   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.576868   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.577319   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.578848   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
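
The "container status" gather uses a shell fallback chain: run crictl if it resolves on PATH, otherwise fall back to the Docker CLI (`sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`). The same two-step fallback expressed in Go, with nothing minikube-specific about it:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus tries crictl first and falls back to docker,
    // mirroring the `... || sudo docker ps -a` chain in the log.
    func containerStatus() ([]byte, error) {
    	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
    		return out, nil
    	}
    	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

    func main() {
    	out, err := containerStatus()
    	fmt.Printf("err=%v\n%s", err, out)
    }
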
	I1003 18:17:01.083146   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:01.093268   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:01.093310   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:01.117816   38063 cri.go:89] found id: ""
	I1003 18:17:01.117833   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.117840   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:01.117844   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:01.117882   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:01.141987   38063 cri.go:89] found id: ""
	I1003 18:17:01.142004   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.142012   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:01.142018   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:01.142057   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:01.165255   38063 cri.go:89] found id: ""
	I1003 18:17:01.165271   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.165277   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:01.165282   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:01.165323   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:01.189244   38063 cri.go:89] found id: ""
	I1003 18:17:01.189257   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.189264   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:01.189269   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:01.189310   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:01.213365   38063 cri.go:89] found id: ""
	I1003 18:17:01.213381   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.213388   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:01.213395   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:01.213442   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:01.240957   38063 cri.go:89] found id: ""
	I1003 18:17:01.240972   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.241000   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:01.241007   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:01.241051   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:01.267290   38063 cri.go:89] found id: ""
	I1003 18:17:01.267306   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.267312   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:01.267320   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:01.267331   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:01.295273   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:01.295290   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:01.364816   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:01.364836   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:01.376420   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:01.376437   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:01.432587   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:01.425391   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.425950   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.427491   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.428036   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.429594   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:01.432599   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:01.432613   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:03.992551   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:04.002736   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:04.002789   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:04.027153   38063 cri.go:89] found id: ""
	I1003 18:17:04.027169   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.027177   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:04.027183   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:04.027240   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:04.052384   38063 cri.go:89] found id: ""
	I1003 18:17:04.052399   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.052406   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:04.052411   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:04.052458   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:04.077210   38063 cri.go:89] found id: ""
	I1003 18:17:04.077225   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.077233   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:04.077243   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:04.077298   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:04.102192   38063 cri.go:89] found id: ""
	I1003 18:17:04.102208   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.102217   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:04.102223   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:04.102266   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:04.126632   38063 cri.go:89] found id: ""
	I1003 18:17:04.126647   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.126653   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:04.126658   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:04.126700   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:04.152736   38063 cri.go:89] found id: ""
	I1003 18:17:04.152752   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.152761   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:04.152768   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:04.152814   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:04.177062   38063 cri.go:89] found id: ""
	I1003 18:17:04.177080   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.177089   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:04.177099   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:04.177112   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:04.188211   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:04.188225   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:04.242641   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:04.235414   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.235943   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.237902   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.238634   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.240168   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:04.242649   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:04.242661   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:04.302342   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:04.302368   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:04.330691   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:04.330717   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:06.899448   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:06.909768   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:06.909813   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:06.934090   38063 cri.go:89] found id: ""
	I1003 18:17:06.934103   38063 logs.go:282] 0 containers: []
	W1003 18:17:06.934109   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:06.934114   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:06.934152   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:06.958320   38063 cri.go:89] found id: ""
	I1003 18:17:06.958334   38063 logs.go:282] 0 containers: []
	W1003 18:17:06.958340   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:06.958343   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:06.958381   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:06.984766   38063 cri.go:89] found id: ""
	I1003 18:17:06.984783   38063 logs.go:282] 0 containers: []
	W1003 18:17:06.984792   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:06.984797   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:06.984857   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:07.011801   38063 cri.go:89] found id: ""
	I1003 18:17:07.011818   38063 logs.go:282] 0 containers: []
	W1003 18:17:07.011827   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:07.011832   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:07.011871   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:07.036323   38063 cri.go:89] found id: ""
	I1003 18:17:07.036339   38063 logs.go:282] 0 containers: []
	W1003 18:17:07.036347   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:07.036352   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:07.036402   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:07.061101   38063 cri.go:89] found id: ""
	I1003 18:17:07.061117   38063 logs.go:282] 0 containers: []
	W1003 18:17:07.061126   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:07.061134   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:07.061184   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:07.085274   38063 cri.go:89] found id: ""
	I1003 18:17:07.085286   38063 logs.go:282] 0 containers: []
	W1003 18:17:07.085293   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:07.085300   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:07.085309   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:07.146317   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:07.146334   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:07.175088   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:07.175102   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:07.243716   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:07.243735   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:07.255174   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:07.255190   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:07.308657   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:07.302083   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:07.302582   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:07.304157   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:07.304555   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:07.306037   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
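
The timestamps show one full probe-and-gather pass roughly every three seconds (18:16:43, 18:16:46, 18:16:49, ...), repeating until the surrounding wait gives up. A generic sketch of such a poll-until-deadline loop; the three-second interval matches the log's cadence, while the one-minute overall timeout is purely illustrative and not minikube's constant:

    package main

    import (
    	"errors"
    	"fmt"
    	"net"
    	"time"
    )

    // waitFor re-runs check every interval until it passes or the
    // overall timeout expires, like the retry loop driving this log.
    func waitFor(check func() bool, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if check() {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out waiting for condition")
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	err := waitFor(func() bool {
    		conn, err := net.DialTimeout("tcp", "localhost:8441", time.Second)
    		if err != nil {
    			return false // apiserver port still refusing connections
    		}
    		conn.Close()
    		return true
    	}, 3*time.Second, time.Minute)
    	fmt.Println("result:", err)
    }
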
	I1003 18:17:09.809372   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:09.819499   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:09.819542   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:09.844409   38063 cri.go:89] found id: ""
	I1003 18:17:09.844423   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.844435   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:09.844439   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:09.844478   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:09.868767   38063 cri.go:89] found id: ""
	I1003 18:17:09.868781   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.868787   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:09.868791   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:09.868832   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:09.891798   38063 cri.go:89] found id: ""
	I1003 18:17:09.891810   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.891817   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:09.891821   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:09.891858   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:09.917378   38063 cri.go:89] found id: ""
	I1003 18:17:09.917393   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.917399   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:09.917405   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:09.917450   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:09.942686   38063 cri.go:89] found id: ""
	I1003 18:17:09.942699   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.942705   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:09.942710   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:09.942750   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:09.966104   38063 cri.go:89] found id: ""
	I1003 18:17:09.966117   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.966123   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:09.966128   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:09.966166   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:09.993525   38063 cri.go:89] found id: ""
	I1003 18:17:09.993538   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.993544   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:09.993551   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:09.993560   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:10.062246   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:10.062265   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
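The two gathering commands above are deliberately bounded: journalctl -u kubelet -n 400 pulls only the newest 400 entries of the kubelet unit, and the dmesg flags (per util-linux dmesg) are -H for human-readable timestamps, -P to disable the pager, -L=never to disable color, and --level to keep only warning-and-worse records. The long-option equivalent, as a sketch (--no-pager added only so it also behaves in an interactive shell):

    sudo journalctl --unit kubelet --lines 400 --no-pager
    sudo dmesg --human --nopager --color=never --level warn,err,crit,alert,emerg | tail -n 400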
	I1003 18:17:10.074081   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:10.074098   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:10.128788   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:10.122249   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:10.122773   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:10.124287   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:10.124702   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:10.126163   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
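Note that "describe nodes" is gathered with the version-pinned kubectl that minikube keeps on the node (/var/lib/minikube/binaries/v1.34.1/kubectl) against the node-local admin kubeconfig, so the failure is independent of the host's kubectl or kubeconfig. Once an API server is actually up, the same invocation works for any other read, for example (a sketch):

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get nodes -o wide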
	I1003 18:17:10.128809   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:10.128820   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:10.186632   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:10.186649   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
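The container-status command above is written defensively: the backticks run `which crictl || echo crictl`, so the full crictl path is used when `which` finds one and the bare name (resolved via PATH at run time) otherwise, while the outer `|| sudo docker ps -a` falls back to Docker if crictl is missing or errors out. The same idiom in the more common $(...) form, as a sketch:

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a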
	I1003 18:17:12.716320   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:12.726641   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:12.726693   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:12.750384   38063 cri.go:89] found id: ""
	I1003 18:17:12.750397   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.750403   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:12.750407   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:12.750446   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:12.775313   38063 cri.go:89] found id: ""
	I1003 18:17:12.775330   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.775338   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:12.775344   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:12.775384   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:12.800228   38063 cri.go:89] found id: ""
	I1003 18:17:12.800244   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.800251   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:12.800256   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:12.800298   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:12.825275   38063 cri.go:89] found id: ""
	I1003 18:17:12.825291   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.825300   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:12.825317   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:12.825372   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:12.849255   38063 cri.go:89] found id: ""
	I1003 18:17:12.849271   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.849279   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:12.849285   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:12.849336   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:12.873407   38063 cri.go:89] found id: ""
	I1003 18:17:12.873421   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.873427   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:12.873431   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:12.873482   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:12.896762   38063 cri.go:89] found id: ""
	I1003 18:17:12.896778   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.896786   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:12.896795   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:12.896807   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:12.960955   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:12.960983   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:12.972163   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:12.972178   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:13.025479   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:13.018959   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:13.019441   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:13.020904   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:13.021379   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:13.022868   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:13.025493   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:13.025506   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:13.086473   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:13.086491   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:15.616095   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:15.626385   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:15.626428   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:15.650771   38063 cri.go:89] found id: ""
	I1003 18:17:15.650785   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.650792   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:15.650796   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:15.650837   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:15.675587   38063 cri.go:89] found id: ""
	I1003 18:17:15.675629   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.675637   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:15.675643   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:15.675705   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:15.699653   38063 cri.go:89] found id: ""
	I1003 18:17:15.699667   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.699673   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:15.699677   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:15.699716   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:15.724414   38063 cri.go:89] found id: ""
	I1003 18:17:15.724427   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.724435   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:15.724441   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:15.724496   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:15.749056   38063 cri.go:89] found id: ""
	I1003 18:17:15.749069   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.749077   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:15.749082   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:15.749123   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:15.773830   38063 cri.go:89] found id: ""
	I1003 18:17:15.773846   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.773859   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:15.773864   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:15.773907   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:15.798104   38063 cri.go:89] found id: ""
	I1003 18:17:15.798120   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.798126   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:15.798133   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:15.798143   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:15.851960   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:15.845372   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:15.845936   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:15.847479   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:15.847794   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:15.849288   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:15.851990   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:15.852005   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:15.909042   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:15.909059   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:15.936198   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:15.936212   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:16.001546   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:16.001563   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:18.514268   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:18.524824   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:18.524867   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:18.549240   38063 cri.go:89] found id: ""
	I1003 18:17:18.549252   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.549259   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:18.549263   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:18.549304   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:18.573832   38063 cri.go:89] found id: ""
	I1003 18:17:18.573846   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.573851   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:18.573855   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:18.573893   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:18.600015   38063 cri.go:89] found id: ""
	I1003 18:17:18.600030   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.600038   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:18.600042   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:18.600092   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:18.624175   38063 cri.go:89] found id: ""
	I1003 18:17:18.624187   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.624193   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:18.624197   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:18.624235   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:18.647489   38063 cri.go:89] found id: ""
	I1003 18:17:18.647506   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.647515   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:18.647521   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:18.647563   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:18.671643   38063 cri.go:89] found id: ""
	I1003 18:17:18.671657   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.671663   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:18.671668   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:18.671706   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:18.696078   38063 cri.go:89] found id: ""
	I1003 18:17:18.696092   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.696098   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:18.696105   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:18.696121   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:18.753226   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:18.753245   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:18.780990   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:18.781068   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:18.847947   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:18.847966   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:18.859021   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:18.859037   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:18.912345   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:18.905516   11225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:18.906367   11225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:18.907929   11225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:18.908373   11225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:18.909849   11225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:21.414030   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:21.425003   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:21.425051   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:21.450060   38063 cri.go:89] found id: ""
	I1003 18:17:21.450073   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.450080   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:21.450085   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:21.450124   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:21.474474   38063 cri.go:89] found id: ""
	I1003 18:17:21.474488   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.474494   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:21.474499   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:21.474539   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:21.498126   38063 cri.go:89] found id: ""
	I1003 18:17:21.498142   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.498149   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:21.498154   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:21.498203   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:21.523905   38063 cri.go:89] found id: ""
	I1003 18:17:21.523923   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.523932   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:21.523938   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:21.524008   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:21.548187   38063 cri.go:89] found id: ""
	I1003 18:17:21.548201   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.548207   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:21.548211   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:21.548252   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:21.572667   38063 cri.go:89] found id: ""
	I1003 18:17:21.572680   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.572686   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:21.572692   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:21.572736   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:21.597807   38063 cri.go:89] found id: ""
	I1003 18:17:21.597824   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.597832   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:21.597839   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:21.597848   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:21.652152   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:21.645230   11331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:21.645729   11331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:21.647282   11331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:21.647701   11331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:21.649188   11331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:21.652166   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:21.652179   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:21.713448   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:21.713465   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:21.742437   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:21.742451   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:21.805537   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:21.805554   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:24.317361   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:24.327608   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:24.327671   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:24.354286   38063 cri.go:89] found id: ""
	I1003 18:17:24.354305   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.354315   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:24.354320   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:24.354379   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:24.378696   38063 cri.go:89] found id: ""
	I1003 18:17:24.378710   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.378718   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:24.378724   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:24.378782   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:24.402575   38063 cri.go:89] found id: ""
	I1003 18:17:24.402589   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.402595   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:24.402600   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:24.402648   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:24.427138   38063 cri.go:89] found id: ""
	I1003 18:17:24.427154   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.427162   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:24.427169   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:24.427211   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:24.451521   38063 cri.go:89] found id: ""
	I1003 18:17:24.451536   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.451543   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:24.451547   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:24.451590   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:24.475930   38063 cri.go:89] found id: ""
	I1003 18:17:24.475943   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.475949   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:24.475954   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:24.476012   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:24.500074   38063 cri.go:89] found id: ""
	I1003 18:17:24.500087   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.500093   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:24.500100   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:24.500109   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:24.566537   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:24.566553   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:24.577539   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:24.577553   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:24.632738   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:24.626123   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:24.626592   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:24.628151   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:24.628571   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:24.630095   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:24.632749   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:24.632758   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:24.690610   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:24.690628   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:27.219340   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:27.229548   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:27.229602   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:27.253625   38063 cri.go:89] found id: ""
	I1003 18:17:27.253647   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.253655   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:27.253661   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:27.253712   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:27.277732   38063 cri.go:89] found id: ""
	I1003 18:17:27.277747   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.277756   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:27.277762   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:27.277804   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:27.301627   38063 cri.go:89] found id: ""
	I1003 18:17:27.301641   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.301647   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:27.301652   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:27.301701   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:27.327361   38063 cri.go:89] found id: ""
	I1003 18:17:27.327377   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.327386   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:27.327392   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:27.327455   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:27.351272   38063 cri.go:89] found id: ""
	I1003 18:17:27.351287   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.351296   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:27.351301   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:27.351354   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:27.376015   38063 cri.go:89] found id: ""
	I1003 18:17:27.376028   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.376034   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:27.376039   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:27.376078   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:27.401069   38063 cri.go:89] found id: ""
	I1003 18:17:27.401083   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.401089   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:27.401096   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:27.401106   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:27.461887   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:27.461903   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:27.489794   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:27.489811   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:27.556416   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:27.556437   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:27.567650   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:27.567666   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:27.621254   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:27.614343   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:27.615016   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:27.616631   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:27.617100   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:27.618643   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:30.121948   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:30.132195   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:30.132251   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:30.157028   38063 cri.go:89] found id: ""
	I1003 18:17:30.157044   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.157052   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:30.157059   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:30.157114   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:30.181243   38063 cri.go:89] found id: ""
	I1003 18:17:30.181257   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.181267   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:30.181272   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:30.181327   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:30.204956   38063 cri.go:89] found id: ""
	I1003 18:17:30.204969   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.204990   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:30.204996   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:30.205049   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:30.229309   38063 cri.go:89] found id: ""
	I1003 18:17:30.229324   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.229332   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:30.229353   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:30.229404   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:30.253288   38063 cri.go:89] found id: ""
	I1003 18:17:30.253302   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.253308   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:30.253312   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:30.253353   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:30.276885   38063 cri.go:89] found id: ""
	I1003 18:17:30.276900   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.276907   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:30.276912   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:30.276954   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:30.302076   38063 cri.go:89] found id: ""
	I1003 18:17:30.302093   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.302102   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:30.302111   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:30.302122   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:30.355957   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:30.349507   11695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:30.350118   11695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:30.351635   11695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:30.351999   11695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:30.353476   11695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:30.355967   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:30.355997   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:30.416595   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:30.416617   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:30.444417   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:30.444433   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:30.511869   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:30.511888   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:33.023698   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:33.034090   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:33.034130   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:33.058440   38063 cri.go:89] found id: ""
	I1003 18:17:33.058454   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.058463   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:33.058469   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:33.058516   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:33.083214   38063 cri.go:89] found id: ""
	I1003 18:17:33.083227   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.083233   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:33.083238   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:33.083278   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:33.107106   38063 cri.go:89] found id: ""
	I1003 18:17:33.107121   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.107128   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:33.107132   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:33.107177   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:33.132152   38063 cri.go:89] found id: ""
	I1003 18:17:33.132169   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.132178   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:33.132184   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:33.132237   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:33.156458   38063 cri.go:89] found id: ""
	I1003 18:17:33.156475   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.156486   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:33.156492   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:33.156541   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:33.181450   38063 cri.go:89] found id: ""
	I1003 18:17:33.181466   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.181474   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:33.181480   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:33.181520   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:33.204281   38063 cri.go:89] found id: ""
	I1003 18:17:33.204299   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.204307   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:33.204316   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:33.204328   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:33.268843   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:33.268862   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:33.280428   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:33.280444   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:33.333875   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:33.327300   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:33.327741   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:33.329337   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:33.329778   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:33.331336   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:33.333888   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:33.333899   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:33.395285   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:33.395303   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:35.924723   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:35.935417   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:35.935459   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:35.959423   38063 cri.go:89] found id: ""
	I1003 18:17:35.959437   38063 logs.go:282] 0 containers: []
	W1003 18:17:35.959444   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:35.959448   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:35.959497   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:35.984930   38063 cri.go:89] found id: ""
	I1003 18:17:35.984943   38063 logs.go:282] 0 containers: []
	W1003 18:17:35.984949   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:35.984953   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:35.985011   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:36.010660   38063 cri.go:89] found id: ""
	I1003 18:17:36.010676   38063 logs.go:282] 0 containers: []
	W1003 18:17:36.010685   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:36.010692   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:36.010750   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:36.036836   38063 cri.go:89] found id: ""
	I1003 18:17:36.036851   38063 logs.go:282] 0 containers: []
	W1003 18:17:36.036859   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:36.036865   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:36.036931   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:36.062748   38063 cri.go:89] found id: ""
	I1003 18:17:36.062764   38063 logs.go:282] 0 containers: []
	W1003 18:17:36.062774   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:36.062780   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:36.062832   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:36.088459   38063 cri.go:89] found id: ""
	I1003 18:17:36.088476   38063 logs.go:282] 0 containers: []
	W1003 18:17:36.088485   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:36.088492   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:36.088544   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:36.118150   38063 cri.go:89] found id: ""
	I1003 18:17:36.118166   38063 logs.go:282] 0 containers: []
	W1003 18:17:36.118174   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:36.118183   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:36.118195   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:36.188996   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:36.189016   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:36.201752   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:36.201774   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:36.259714   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:36.253085   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:36.253879   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:36.255461   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:36.255860   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:36.257025   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:36.259724   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:36.259734   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:36.319327   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:36.319348   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:38.849084   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:38.860041   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:38.860087   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:38.885371   38063 cri.go:89] found id: ""
	I1003 18:17:38.885387   38063 logs.go:282] 0 containers: []
	W1003 18:17:38.885396   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:38.885403   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:38.885448   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:38.910420   38063 cri.go:89] found id: ""
	I1003 18:17:38.910433   38063 logs.go:282] 0 containers: []
	W1003 18:17:38.910439   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:38.910443   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:38.910492   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:38.935082   38063 cri.go:89] found id: ""
	I1003 18:17:38.935098   38063 logs.go:282] 0 containers: []
	W1003 18:17:38.935113   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:38.935119   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:38.935163   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:38.959589   38063 cri.go:89] found id: ""
	I1003 18:17:38.959605   38063 logs.go:282] 0 containers: []
	W1003 18:17:38.959614   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:38.959620   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:38.959664   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:38.983218   38063 cri.go:89] found id: ""
	I1003 18:17:38.983231   38063 logs.go:282] 0 containers: []
	W1003 18:17:38.983237   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:38.983241   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:38.983283   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:39.007734   38063 cri.go:89] found id: ""
	I1003 18:17:39.007748   38063 logs.go:282] 0 containers: []
	W1003 18:17:39.007754   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:39.007759   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:39.007803   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:39.032274   38063 cri.go:89] found id: ""
	I1003 18:17:39.032288   38063 logs.go:282] 0 containers: []
	W1003 18:17:39.032294   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:39.032301   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:39.032310   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:39.085898   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:39.079359   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:39.079847   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:39.081436   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:39.081830   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:39.083352   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:39.085913   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:39.085926   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:39.147336   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:39.147355   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:39.174505   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:39.174520   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:39.236749   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:39.236770   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:41.751919   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:41.762279   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:41.762318   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:41.788348   38063 cri.go:89] found id: ""
	I1003 18:17:41.788364   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.788370   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:41.788375   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:41.788416   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:41.813364   38063 cri.go:89] found id: ""
	I1003 18:17:41.813377   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.813383   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:41.813387   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:41.813428   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:41.838263   38063 cri.go:89] found id: ""
	I1003 18:17:41.838278   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.838286   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:41.838296   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:41.838342   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:41.863852   38063 cri.go:89] found id: ""
	I1003 18:17:41.863866   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.863875   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:41.863880   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:41.863928   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:41.888046   38063 cri.go:89] found id: ""
	I1003 18:17:41.888059   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.888065   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:41.888069   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:41.888123   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:41.912391   38063 cri.go:89] found id: ""
	I1003 18:17:41.912407   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.912414   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:41.912419   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:41.912465   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:41.936635   38063 cri.go:89] found id: ""
	I1003 18:17:41.936652   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.936667   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:41.936673   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:41.936682   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:41.999904   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:41.999923   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:42.010760   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:42.010774   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:42.063379   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:42.056776   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:42.057312   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:42.058864   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:42.059272   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:42.060765   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:42.063391   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:42.063403   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:42.120707   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:42.120724   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:44.649184   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:44.659323   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:44.659383   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:44.684688   38063 cri.go:89] found id: ""
	I1003 18:17:44.684705   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.684714   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:44.684720   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:44.684766   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:44.709094   38063 cri.go:89] found id: ""
	I1003 18:17:44.709107   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.709113   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:44.709117   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:44.709155   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:44.733401   38063 cri.go:89] found id: ""
	I1003 18:17:44.733417   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.733426   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:44.733430   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:44.733469   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:44.757753   38063 cri.go:89] found id: ""
	I1003 18:17:44.757772   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.757780   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:44.757786   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:44.757841   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:44.781910   38063 cri.go:89] found id: ""
	I1003 18:17:44.781926   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.781933   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:44.781939   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:44.781995   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:44.805801   38063 cri.go:89] found id: ""
	I1003 18:17:44.805820   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.805829   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:44.805835   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:44.805882   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:44.830172   38063 cri.go:89] found id: ""
	I1003 18:17:44.830187   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.830195   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:44.830204   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:44.830218   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:44.898633   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:44.898651   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:44.909788   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:44.909802   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:44.964112   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:44.957005   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:44.957997   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:44.959562   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:44.960003   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:44.961510   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:44.964123   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:44.964137   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:45.022483   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:45.022503   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:47.552208   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:47.562597   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:47.562644   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:47.587653   38063 cri.go:89] found id: ""
	I1003 18:17:47.587666   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.587672   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:47.587676   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:47.587722   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:47.611271   38063 cri.go:89] found id: ""
	I1003 18:17:47.611287   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.611294   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:47.611298   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:47.611344   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:47.635604   38063 cri.go:89] found id: ""
	I1003 18:17:47.635617   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.635625   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:47.635631   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:47.635704   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:47.660903   38063 cri.go:89] found id: ""
	I1003 18:17:47.660926   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.660933   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:47.660938   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:47.661007   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:47.686109   38063 cri.go:89] found id: ""
	I1003 18:17:47.686122   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.686129   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:47.686133   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:47.686172   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:47.710137   38063 cri.go:89] found id: ""
	I1003 18:17:47.710153   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.710161   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:47.710167   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:47.710207   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:47.734797   38063 cri.go:89] found id: ""
	I1003 18:17:47.734817   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.734826   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:47.734835   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:47.734849   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:47.745548   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:47.745565   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:47.799254   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:47.792392   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:47.793029   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:47.794533   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:47.794963   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:47.796403   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:47.799265   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:47.799274   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:47.861703   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:47.861720   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:47.888938   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:47.888953   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:50.454766   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:50.465005   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:50.465050   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:50.489074   38063 cri.go:89] found id: ""
	I1003 18:17:50.489087   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.489093   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:50.489098   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:50.489139   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:50.513935   38063 cri.go:89] found id: ""
	I1003 18:17:50.513950   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.513959   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:50.513964   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:50.514027   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:50.539148   38063 cri.go:89] found id: ""
	I1003 18:17:50.539166   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.539173   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:50.539179   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:50.539220   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:50.562923   38063 cri.go:89] found id: ""
	I1003 18:17:50.562944   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.562950   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:50.562959   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:50.563021   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:50.587009   38063 cri.go:89] found id: ""
	I1003 18:17:50.587022   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.587029   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:50.587033   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:50.587081   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:50.611334   38063 cri.go:89] found id: ""
	I1003 18:17:50.611350   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.611356   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:50.611361   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:50.611410   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:50.634818   38063 cri.go:89] found id: ""
	I1003 18:17:50.634832   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.634839   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:50.634846   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:50.634856   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:50.696044   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:50.696061   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:50.722679   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:50.722696   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:50.789104   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:50.789122   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:50.800113   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:50.800126   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:50.853877   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:50.846722   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:50.847312   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:50.848906   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:50.849353   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:50.851079   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:53.354772   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:53.365080   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:53.365139   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:53.389900   38063 cri.go:89] found id: ""
	I1003 18:17:53.389913   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.389920   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:53.389930   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:53.389993   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:53.414775   38063 cri.go:89] found id: ""
	I1003 18:17:53.414790   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.414797   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:53.414801   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:53.414847   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:53.439429   38063 cri.go:89] found id: ""
	I1003 18:17:53.439445   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.439454   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:53.439460   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:53.439506   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:53.464200   38063 cri.go:89] found id: ""
	I1003 18:17:53.464214   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.464220   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:53.464225   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:53.464263   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:53.488529   38063 cri.go:89] found id: ""
	I1003 18:17:53.488542   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.488550   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:53.488556   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:53.488612   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:53.512935   38063 cri.go:89] found id: ""
	I1003 18:17:53.512950   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.512957   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:53.512962   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:53.513028   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:53.536738   38063 cri.go:89] found id: ""
	I1003 18:17:53.536754   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.536763   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:53.536771   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:53.536784   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:53.602221   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:53.602237   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:53.613558   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:53.613573   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:53.667019   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:53.660222   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:53.660704   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:53.662310   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:53.662769   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:53.664227   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:53.667029   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:53.667039   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:53.725461   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:53.725480   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:56.254692   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:56.264956   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:56.265017   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:56.289747   38063 cri.go:89] found id: ""
	I1003 18:17:56.289764   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.289772   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:56.289779   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:56.289821   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:56.314478   38063 cri.go:89] found id: ""
	I1003 18:17:56.314493   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.314501   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:56.314507   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:56.314557   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:56.338961   38063 cri.go:89] found id: ""
	I1003 18:17:56.338989   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.338998   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:56.339004   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:56.339046   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:56.364770   38063 cri.go:89] found id: ""
	I1003 18:17:56.364784   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.364789   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:56.364793   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:56.364832   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:56.391018   38063 cri.go:89] found id: ""
	I1003 18:17:56.391031   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.391037   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:56.391041   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:56.391081   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:56.415373   38063 cri.go:89] found id: ""
	I1003 18:17:56.415389   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.415398   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:56.415405   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:56.415447   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:56.439537   38063 cri.go:89] found id: ""
	I1003 18:17:56.439554   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.439564   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:56.439572   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:56.439584   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:56.506236   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:56.506256   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:56.517260   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:56.517274   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:56.570626   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:56.564107   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:56.564604   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:56.566115   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:56.566514   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:56.568021   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:56.570639   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:56.570658   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:56.633346   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:56.633369   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
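The cycle above is minikube's apiserver wait loop: it probes for a running kube-apiserver process, lists CRI containers for each expected control-plane component, and, finding none, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying a few seconds later. A minimal shell sketch of one probe pass, built only from the commands visible in the log (an illustration of the pattern, not minikube's actual code):

    # probe for a running apiserver process, as in the pgrep line above
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # list all CRI containers per component; empty output here is what
    # produces the "0 containers" / "No container was found" warnings
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$name"
    done

    # diagnostics gathered on each failed pass
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

Every probe in this run returns nothing, so the loop keeps cycling until minikube gives up.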
	I1003 18:17:59.161404   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:59.171988   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:59.172046   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:59.196437   38063 cri.go:89] found id: ""
	I1003 18:17:59.196449   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.196455   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:59.196459   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:59.196498   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:59.220855   38063 cri.go:89] found id: ""
	I1003 18:17:59.220868   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.220874   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:59.220878   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:59.220926   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:59.246564   38063 cri.go:89] found id: ""
	I1003 18:17:59.246579   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.246587   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:59.246595   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:59.246655   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:59.271407   38063 cri.go:89] found id: ""
	I1003 18:17:59.271422   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.271428   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:59.271433   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:59.271474   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:59.295265   38063 cri.go:89] found id: ""
	I1003 18:17:59.295281   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.295290   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:59.295297   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:59.295344   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:59.319819   38063 cri.go:89] found id: ""
	I1003 18:17:59.319835   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.319849   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:59.319853   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:59.319893   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:59.344045   38063 cri.go:89] found id: ""
	I1003 18:17:59.344058   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.344064   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:59.344071   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:59.344080   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:59.411448   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:59.411465   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:59.422319   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:59.422332   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:59.475228   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:59.468454   12932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:59.468914   12932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:59.470455   12932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:59.470862   12932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:59.472347   12932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:59.475255   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:59.475270   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:59.536088   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:59.536106   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:02.065737   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:02.076173   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:02.076214   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:02.101478   38063 cri.go:89] found id: ""
	I1003 18:18:02.101495   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.101505   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:02.101513   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:02.101556   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:02.126528   38063 cri.go:89] found id: ""
	I1003 18:18:02.126541   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.126547   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:02.126551   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:02.126591   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:02.150958   38063 cri.go:89] found id: ""
	I1003 18:18:02.150971   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.150997   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:02.151003   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:02.151051   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:02.176464   38063 cri.go:89] found id: ""
	I1003 18:18:02.176478   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.176485   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:02.176497   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:02.176539   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:02.201345   38063 cri.go:89] found id: ""
	I1003 18:18:02.201361   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.201368   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:02.201373   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:02.201415   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:02.227338   38063 cri.go:89] found id: ""
	I1003 18:18:02.227352   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.227359   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:02.227363   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:02.227407   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:02.253859   38063 cri.go:89] found id: ""
	I1003 18:18:02.253875   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.253882   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:02.253890   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:02.253902   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:02.314960   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:02.314986   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:02.343587   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:02.343605   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:02.412159   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:02.412178   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:02.423525   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:02.423542   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:02.480478   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:02.473940   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:02.474565   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:02.476146   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:02.476539   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:02.477814   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:04.981110   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:04.992430   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:04.992470   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:05.019218   38063 cri.go:89] found id: ""
	I1003 18:18:05.019232   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.019238   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:05.019243   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:05.019282   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:05.042823   38063 cri.go:89] found id: ""
	I1003 18:18:05.042836   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.042841   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:05.042845   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:05.042902   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:05.069124   38063 cri.go:89] found id: ""
	I1003 18:18:05.069141   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.069148   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:05.069152   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:05.069196   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:05.093833   38063 cri.go:89] found id: ""
	I1003 18:18:05.093848   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.093856   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:05.093862   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:05.093932   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:05.119454   38063 cri.go:89] found id: ""
	I1003 18:18:05.119468   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.119475   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:05.119479   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:05.119523   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:05.143897   38063 cri.go:89] found id: ""
	I1003 18:18:05.143914   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.143920   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:05.143925   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:05.143966   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:05.167637   38063 cri.go:89] found id: ""
	I1003 18:18:05.167650   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.167656   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:05.167663   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:05.167674   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:05.195697   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:05.195715   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:05.260408   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:05.260428   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:05.271292   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:05.271309   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:05.324867   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:05.318440   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:05.318912   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:05.320332   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:05.320733   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:05.322261   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:05.324886   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:05.324898   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:07.885833   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:07.895849   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:07.895957   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:07.921467   38063 cri.go:89] found id: ""
	I1003 18:18:07.921479   38063 logs.go:282] 0 containers: []
	W1003 18:18:07.921485   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:07.921490   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:07.921545   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:07.945467   38063 cri.go:89] found id: ""
	I1003 18:18:07.945480   38063 logs.go:282] 0 containers: []
	W1003 18:18:07.945487   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:07.945492   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:07.945539   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:07.970084   38063 cri.go:89] found id: ""
	I1003 18:18:07.970098   38063 logs.go:282] 0 containers: []
	W1003 18:18:07.970105   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:07.970110   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:07.970148   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:07.994263   38063 cri.go:89] found id: ""
	I1003 18:18:07.994278   38063 logs.go:282] 0 containers: []
	W1003 18:18:07.994287   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:07.994293   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:07.994334   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:08.018778   38063 cri.go:89] found id: ""
	I1003 18:18:08.018793   38063 logs.go:282] 0 containers: []
	W1003 18:18:08.018800   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:08.018805   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:08.018844   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:08.043138   38063 cri.go:89] found id: ""
	I1003 18:18:08.043153   38063 logs.go:282] 0 containers: []
	W1003 18:18:08.043159   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:08.043164   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:08.043203   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:08.067785   38063 cri.go:89] found id: ""
	I1003 18:18:08.067799   38063 logs.go:282] 0 containers: []
	W1003 18:18:08.067805   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:08.067811   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:08.067820   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:08.136408   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:08.136429   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:08.147427   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:08.147445   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:08.201110   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:08.194693   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:08.195161   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:08.196715   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:08.197135   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:08.198610   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:08.201124   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:08.201135   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:08.261991   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:08.262010   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:10.791196   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:10.801467   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:10.801525   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:10.827655   38063 cri.go:89] found id: ""
	I1003 18:18:10.827672   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.827683   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:10.827688   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:10.827735   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:10.852558   38063 cri.go:89] found id: ""
	I1003 18:18:10.852574   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.852582   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:10.852588   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:10.852638   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:10.876842   38063 cri.go:89] found id: ""
	I1003 18:18:10.876858   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.876870   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:10.876874   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:10.876918   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:10.902827   38063 cri.go:89] found id: ""
	I1003 18:18:10.902840   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.902846   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:10.902851   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:10.902890   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:10.927840   38063 cri.go:89] found id: ""
	I1003 18:18:10.927855   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.927861   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:10.927865   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:10.927909   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:10.952535   38063 cri.go:89] found id: ""
	I1003 18:18:10.952550   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.952556   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:10.952561   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:10.952602   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:10.976585   38063 cri.go:89] found id: ""
	I1003 18:18:10.976601   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.976610   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:10.976620   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:10.976631   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:10.987359   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:10.987373   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:11.041048   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:11.034604   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:11.035105   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:11.036603   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:11.036989   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:11.038508   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:11.041058   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:11.041068   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:11.101637   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:11.101658   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:11.128867   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:11.128885   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:13.697689   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:13.708864   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:13.708949   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:13.733837   38063 cri.go:89] found id: ""
	I1003 18:18:13.733851   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.733857   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:13.733864   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:13.733915   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:13.757681   38063 cri.go:89] found id: ""
	I1003 18:18:13.757698   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.757707   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:13.757713   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:13.757778   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:13.782545   38063 cri.go:89] found id: ""
	I1003 18:18:13.782560   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.782572   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:13.782576   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:13.782624   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:13.806939   38063 cri.go:89] found id: ""
	I1003 18:18:13.806955   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.806964   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:13.806970   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:13.807041   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:13.831768   38063 cri.go:89] found id: ""
	I1003 18:18:13.831783   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.831790   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:13.831795   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:13.831837   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:13.856076   38063 cri.go:89] found id: ""
	I1003 18:18:13.856093   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.856101   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:13.856107   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:13.856163   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:13.879410   38063 cri.go:89] found id: ""
	I1003 18:18:13.879423   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.879430   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:13.879438   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:13.879450   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:13.944708   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:13.944727   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:13.956175   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:13.956194   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:14.010487   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:14.003834   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:14.004418   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:14.005911   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:14.006368   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:14.007894   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:14.010499   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:14.010514   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:14.071892   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:14.071911   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:16.601878   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:16.612139   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:16.612183   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:16.635115   38063 cri.go:89] found id: ""
	I1003 18:18:16.635128   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.635134   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:16.635139   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:16.635180   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:16.660332   38063 cri.go:89] found id: ""
	I1003 18:18:16.660347   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.660354   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:16.660361   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:16.660416   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:16.683528   38063 cri.go:89] found id: ""
	I1003 18:18:16.683551   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.683560   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:16.683566   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:16.683619   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:16.708287   38063 cri.go:89] found id: ""
	I1003 18:18:16.708304   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.708313   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:16.708319   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:16.708368   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:16.732627   38063 cri.go:89] found id: ""
	I1003 18:18:16.732642   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.732651   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:16.732670   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:16.732712   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:16.757768   38063 cri.go:89] found id: ""
	I1003 18:18:16.757782   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.757788   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:16.757793   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:16.757836   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:16.781970   38063 cri.go:89] found id: ""
	I1003 18:18:16.781997   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.782011   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:16.782020   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:16.782036   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:16.850796   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:16.850813   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:16.862129   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:16.862143   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:16.915039   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:16.908470   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:16.908860   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:16.910345   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:16.910711   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:16.912263   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:16.915050   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:16.915063   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:16.972388   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:16.972405   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:19.502094   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:19.512481   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:19.512541   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:19.537212   38063 cri.go:89] found id: ""
	I1003 18:18:19.537228   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.537236   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:19.537242   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:19.537305   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:19.561717   38063 cri.go:89] found id: ""
	I1003 18:18:19.561734   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.561741   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:19.561746   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:19.561793   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:19.585423   38063 cri.go:89] found id: ""
	I1003 18:18:19.585436   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.585443   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:19.585447   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:19.585490   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:19.609708   38063 cri.go:89] found id: ""
	I1003 18:18:19.609722   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.609728   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:19.609733   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:19.609772   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:19.632853   38063 cri.go:89] found id: ""
	I1003 18:18:19.632869   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.632878   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:19.632884   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:19.632933   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:19.656204   38063 cri.go:89] found id: ""
	I1003 18:18:19.656220   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.656228   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:19.656235   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:19.656287   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:19.680640   38063 cri.go:89] found id: ""
	I1003 18:18:19.680663   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.680669   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:19.680677   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:19.680689   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:19.707259   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:19.707275   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:19.774362   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:19.774380   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:19.785563   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:19.785577   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:19.839901   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:19.833112   13812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:19.833732   13812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:19.835306   13812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:19.835682   13812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:19.837164   13812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:19.839911   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:19.839921   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
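Every describe-nodes attempt in these cycles fails identically: kubectl on the node dials the apiserver at localhost:8441 and gets connection refused, which matches the empty crictl listings above (no kube-apiserver container ever comes up). A one-line check that reproduces the symptom, assuming curl is present on the node (a hypothetical illustration, not part of the test run):

    # nothing is listening on the profile's apiserver port, so this fails
    # with the same "connection refused" seen in the kubectl stderr
    curl -sk https://localhost:8441/healthz || echo 'apiserver not reachable'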
	I1003 18:18:22.400537   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:22.410712   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:22.410758   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:22.434956   38063 cri.go:89] found id: ""
	I1003 18:18:22.434970   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.434988   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:22.434995   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:22.435050   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:22.459920   38063 cri.go:89] found id: ""
	I1003 18:18:22.459936   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.459945   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:22.459950   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:22.460011   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:22.484807   38063 cri.go:89] found id: ""
	I1003 18:18:22.484821   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.484827   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:22.484832   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:22.484876   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:22.510038   38063 cri.go:89] found id: ""
	I1003 18:18:22.510055   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.510063   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:22.510069   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:22.510127   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:22.534586   38063 cri.go:89] found id: ""
	I1003 18:18:22.534606   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.534616   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:22.534622   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:22.534684   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:22.559759   38063 cri.go:89] found id: ""
	I1003 18:18:22.559776   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.559785   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:22.559791   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:22.559847   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:22.584554   38063 cri.go:89] found id: ""
	I1003 18:18:22.584569   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.584579   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:22.584588   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:22.584602   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:22.653550   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:22.653568   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:22.664744   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:22.664760   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:22.718670   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:22.712190   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:22.712660   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:22.714209   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:22.714609   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:22.716119   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:18:22.712190   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:22.712660   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:22.714209   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:22.714609   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:22.716119   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:18:22.718679   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:22.718689   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:22.781634   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:22.781662   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:25.311342   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:25.321538   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:25.321589   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:25.347212   38063 cri.go:89] found id: ""
	I1003 18:18:25.347228   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.347237   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:25.347244   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:25.347288   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:25.373240   38063 cri.go:89] found id: ""
	I1003 18:18:25.373255   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.373261   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:25.373265   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:25.373316   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:25.398262   38063 cri.go:89] found id: ""
	I1003 18:18:25.398280   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.398287   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:25.398293   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:25.398340   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:25.423522   38063 cri.go:89] found id: ""
	I1003 18:18:25.423536   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.423544   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:25.423550   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:25.423609   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:25.448232   38063 cri.go:89] found id: ""
	I1003 18:18:25.448249   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.448258   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:25.448264   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:25.448311   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:25.474690   38063 cri.go:89] found id: ""
	I1003 18:18:25.474704   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.474710   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:25.474716   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:25.474766   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:25.499693   38063 cri.go:89] found id: ""
	I1003 18:18:25.499707   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.499715   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:25.499723   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:25.499733   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:25.526210   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:25.526225   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:25.595354   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:25.595373   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:25.606969   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:25.606998   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:25.662186   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:25.655368   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:25.655970   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:25.657492   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:25.657931   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:25.659386   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:18:25.655368   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:25.655970   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:25.657492   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:25.657931   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:25.659386   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:18:25.662197   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:25.662206   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:28.226017   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:28.237132   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:28.237175   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:28.262449   38063 cri.go:89] found id: ""
	I1003 18:18:28.262466   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.262474   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:28.262479   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:28.262524   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:28.287653   38063 cri.go:89] found id: ""
	I1003 18:18:28.287669   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.287679   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:28.287685   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:28.287730   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:28.313255   38063 cri.go:89] found id: ""
	I1003 18:18:28.313269   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.313276   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:28.313280   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:28.313321   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:28.338727   38063 cri.go:89] found id: ""
	I1003 18:18:28.338742   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.338748   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:28.338752   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:28.338793   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:28.363285   38063 cri.go:89] found id: ""
	I1003 18:18:28.363303   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.363312   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:28.363317   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:28.363359   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:28.388945   38063 cri.go:89] found id: ""
	I1003 18:18:28.388958   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.388964   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:28.388969   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:28.389039   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:28.414591   38063 cri.go:89] found id: ""
	I1003 18:18:28.414607   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.414614   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:28.414621   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:28.414630   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:28.425367   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:28.425382   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:28.479472   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:28.472065   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:28.472604   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:28.474900   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:28.475366   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:28.476874   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:18:28.472065   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:28.472604   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:28.474900   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:28.475366   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:28.476874   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:18:28.479481   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:28.479491   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:28.538844   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:28.538865   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:28.567294   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:28.567309   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:31.138009   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:31.148430   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:31.148480   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:31.173355   38063 cri.go:89] found id: ""
	I1003 18:18:31.173368   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.173375   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:31.173380   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:31.173418   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:31.198151   38063 cri.go:89] found id: ""
	I1003 18:18:31.198166   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.198181   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:31.198187   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:31.198231   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:31.223275   38063 cri.go:89] found id: ""
	I1003 18:18:31.223290   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.223296   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:31.223300   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:31.223343   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:31.247221   38063 cri.go:89] found id: ""
	I1003 18:18:31.247237   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.247248   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:31.247253   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:31.247310   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:31.270563   38063 cri.go:89] found id: ""
	I1003 18:18:31.270576   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.270582   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:31.270586   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:31.270636   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:31.295134   38063 cri.go:89] found id: ""
	I1003 18:18:31.295150   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.295159   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:31.295165   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:31.295204   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:31.319654   38063 cri.go:89] found id: ""
	I1003 18:18:31.319668   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.319675   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:31.319683   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:31.319698   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:31.386428   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:31.386448   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:31.397662   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:31.397677   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:31.451288   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:31.444650   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.445190   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.446750   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.447199   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.448658   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:18:31.444650   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.445190   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.446750   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.447199   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.448658   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:18:31.451299   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:31.451309   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:31.510468   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:31.510487   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:34.039627   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:34.050185   38063 kubeadm.go:601] duration metric: took 4m1.950557888s to restartPrimaryControlPlane
	W1003 18:18:34.050251   38063 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1003 18:18:34.050324   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 18:18:34.501082   38063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:18:34.513430   38063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:18:34.521102   38063 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:18:34.521139   38063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:18:34.528531   38063 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:18:34.528540   38063 kubeadm.go:157] found existing configuration files:
	
	I1003 18:18:34.528574   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1003 18:18:34.535908   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:18:34.535967   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:18:34.543072   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1003 18:18:34.550220   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:18:34.550263   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:18:34.557251   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1003 18:18:34.565090   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:18:34.565130   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:18:34.571882   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1003 18:18:34.579174   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:18:34.579210   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
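The grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8441 and is removed otherwise. Because kubeadm reset deleted all four files, every grep exits with status 2 and every rm is a no-op. A condensed sketch of the same logic (a hypothetical consolidation of the per-file commands shown above, not how minikube actually invokes them):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"   # drop configs that are absent or point elsewhere
	done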
	I1003 18:18:34.585996   38063 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:18:34.620715   38063 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:18:34.620773   38063 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:18:34.639243   38063 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:18:34.639317   38063 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:18:34.639360   38063 kubeadm.go:318] OS: Linux
	I1003 18:18:34.639397   38063 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:18:34.639466   38063 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:18:34.639529   38063 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:18:34.639587   38063 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:18:34.639687   38063 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:18:34.639749   38063 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:18:34.639803   38063 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:18:34.639863   38063 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:18:34.692781   38063 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:18:34.692898   38063 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:18:34.693025   38063 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:18:34.699300   38063 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:18:34.703358   38063 out.go:252]   - Generating certificates and keys ...
	I1003 18:18:34.703438   38063 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:18:34.703491   38063 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:18:34.703553   38063 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 18:18:34.703602   38063 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 18:18:34.703664   38063 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 18:18:34.703733   38063 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 18:18:34.703790   38063 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 18:18:34.703840   38063 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 18:18:34.703900   38063 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 18:18:34.703962   38063 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 18:18:34.704000   38063 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 18:18:34.704043   38063 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:18:34.953422   38063 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:18:35.214353   38063 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:18:35.447415   38063 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:18:35.645347   38063 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:18:36.220332   38063 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:18:36.220714   38063 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:18:36.222788   38063 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:18:36.225372   38063 out.go:252]   - Booting up control plane ...
	I1003 18:18:36.225492   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:18:36.225605   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:18:36.225672   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:18:36.237955   38063 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:18:36.238117   38063 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:18:36.244390   38063 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:18:36.244573   38063 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:18:36.244608   38063 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:18:36.339701   38063 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:18:36.339860   38063 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:18:36.841336   38063 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.785786ms
	I1003 18:18:36.845100   38063 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:18:36.845207   38063 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1003 18:18:36.845308   38063 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:18:36.845418   38063 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:22:36.846410   38063 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001254073s
	I1003 18:22:36.846572   38063 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001316832s
	I1003 18:22:36.846680   38063 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00135784s
	I1003 18:22:36.846684   38063 kubeadm.go:318] 
	I1003 18:22:36.846803   38063 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:22:36.846887   38063 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:22:36.847019   38063 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:22:36.847152   38063 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:22:36.847221   38063 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:22:36.847290   38063 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:22:36.847293   38063 kubeadm.go:318] 
	I1003 18:22:36.850267   38063 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:22:36.850420   38063 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:22:36.851109   38063 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1003 18:22:36.851222   38063 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1003 18:22:36.851310   38063 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.785786ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001254073s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001316832s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00135784s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
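kubeadm's troubleshooting hint reduces to two commands against this cluster's CRI-O socket, spelled out below with sudo added since the test harness runs everything as root over SSH (CONTAINERID is a placeholder to be filled in from the first command's output):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID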
	
	I1003 18:22:36.851378   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 18:22:37.292774   38063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:22:37.305190   38063 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:22:37.305239   38063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:22:37.312706   38063 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:22:37.312714   38063 kubeadm.go:157] found existing configuration files:
	
	I1003 18:22:37.312747   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1003 18:22:37.319873   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:22:37.319914   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:22:37.326628   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1003 18:22:37.333616   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:22:37.333654   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:22:37.340503   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1003 18:22:37.347489   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:22:37.347533   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:22:37.354448   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1003 18:22:37.361615   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:22:37.361649   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:22:37.368313   38063 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:22:37.421185   38063 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:22:37.475455   38063 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:26:40.291288   38063 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1003 18:26:40.291385   38063 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 18:26:40.294089   38063 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:26:40.294149   38063 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:26:40.294247   38063 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:26:40.294331   38063 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:26:40.294363   38063 kubeadm.go:318] OS: Linux
	I1003 18:26:40.294399   38063 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:26:40.294467   38063 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:26:40.294515   38063 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:26:40.294554   38063 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:26:40.294601   38063 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:26:40.294658   38063 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:26:40.294706   38063 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:26:40.294741   38063 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:26:40.294849   38063 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:26:40.294960   38063 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:26:40.295057   38063 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:26:40.295109   38063 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:26:40.297835   38063 out.go:252]   - Generating certificates and keys ...
	I1003 18:26:40.297914   38063 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:26:40.297990   38063 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:26:40.298082   38063 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 18:26:40.298152   38063 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 18:26:40.298217   38063 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 18:26:40.298275   38063 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 18:26:40.298326   38063 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 18:26:40.298376   38063 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 18:26:40.298444   38063 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 18:26:40.298519   38063 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 18:26:40.298554   38063 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 18:26:40.298605   38063 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:26:40.298646   38063 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:26:40.298698   38063 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:26:40.298740   38063 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:26:40.298791   38063 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:26:40.298839   38063 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:26:40.298907   38063 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:26:40.298998   38063 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:26:40.300468   38063 out.go:252]   - Booting up control plane ...
	I1003 18:26:40.300542   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:26:40.300632   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:26:40.300695   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:26:40.300779   38063 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:26:40.300871   38063 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:26:40.300963   38063 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:26:40.301061   38063 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:26:40.301100   38063 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:26:40.301207   38063 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:26:40.301294   38063 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:26:40.301341   38063 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500810972s
	I1003 18:26:40.301415   38063 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:26:40.301479   38063 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1003 18:26:40.301550   38063 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:26:40.301629   38063 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:26:40.301688   38063 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001083242s
	I1003 18:26:40.301753   38063 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001112366s
	I1003 18:26:40.301845   38063 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001257154s
	I1003 18:26:40.301849   38063 kubeadm.go:318] 
	I1003 18:26:40.301925   38063 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:26:40.302009   38063 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:26:40.302080   38063 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:26:40.302157   38063 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:26:40.302217   38063 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:26:40.302288   38063 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:26:40.302308   38063 kubeadm.go:318] 
	I1003 18:26:40.302352   38063 kubeadm.go:402] duration metric: took 12m8.237590419s to StartCluster
	I1003 18:26:40.302401   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:26:40.302450   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:26:40.329135   38063 cri.go:89] found id: ""
	I1003 18:26:40.329148   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.329154   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:26:40.329160   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:26:40.329203   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:26:40.354340   38063 cri.go:89] found id: ""
	I1003 18:26:40.354354   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.354361   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:26:40.354366   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:26:40.354419   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:26:40.380556   38063 cri.go:89] found id: ""
	I1003 18:26:40.380570   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.380576   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:26:40.380581   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:26:40.380640   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:26:40.406655   38063 cri.go:89] found id: ""
	I1003 18:26:40.406670   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.406677   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:26:40.406683   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:26:40.406728   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:26:40.432698   38063 cri.go:89] found id: ""
	I1003 18:26:40.432713   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.432720   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:26:40.432725   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:26:40.432769   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:26:40.459363   38063 cri.go:89] found id: ""
	I1003 18:26:40.459378   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.459384   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:26:40.459390   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:26:40.459437   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:26:40.484951   38063 cri.go:89] found id: ""
	I1003 18:26:40.484964   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.484971   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:26:40.484997   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:26:40.485019   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:26:40.549245   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:26:40.549263   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:26:40.560727   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:26:40.560741   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:26:40.616474   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:26:40.609386   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.610009   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.611564   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.611939   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.613451   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:26:40.609386   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.610009   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.611564   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.611939   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.613451   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:26:40.616500   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:26:40.616509   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:26:40.676470   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:26:40.676488   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1003 18:26:40.704576   38063 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500810972s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001083242s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112366s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001257154s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1003 18:26:40.704638   38063 out.go:285] * 
	W1003 18:26:40.704701   38063 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500810972s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001083242s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112366s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001257154s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:26:40.704715   38063 out.go:285] * 
	W1003 18:26:40.706538   38063 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:26:40.710390   38063 out.go:203] 
	W1003 18:26:40.711880   38063 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500810972s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001083242s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112366s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001257154s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:26:40.711903   38063 out.go:285] * 
	I1003 18:26:40.714182   38063 out.go:203] 
	
	
	==> CRI-O <==
	Oct 03 18:26:34 functional-889240 crio[5881]: time="2025-10-03T18:26:34.948118628Z" level=info msg="createCtr: removing container 4a0da56a80b0bf9cf042a1ed29d0e9a46f1bcc83feb34f5c75fb117227f399ca" id=b2becadb-533e-4a54-9579-2ace7aeb4dbb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:34 functional-889240 crio[5881]: time="2025-10-03T18:26:34.948150012Z" level=info msg="createCtr: deleting container 4a0da56a80b0bf9cf042a1ed29d0e9a46f1bcc83feb34f5c75fb117227f399ca from storage" id=b2becadb-533e-4a54-9579-2ace7aeb4dbb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:34 functional-889240 crio[5881]: time="2025-10-03T18:26:34.950407562Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-889240_kube-system_7e715cb6024854d45a9fa99576167e43_0" id=b2becadb-533e-4a54-9579-2ace7aeb4dbb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:36 functional-889240 crio[5881]: time="2025-10-03T18:26:36.924698487Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=99144eec-10ba-48ad-9ef7-71167b1dc31a name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:36 functional-889240 crio[5881]: time="2025-10-03T18:26:36.925531495Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=d51ec181-c49a-48b5-b411-5c6c9b8cf406 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:36 functional-889240 crio[5881]: time="2025-10-03T18:26:36.926349562Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-889240/kube-apiserver" id=00fa20ab-4f0b-4c2e-8277-c8e0a21c8a69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:36 functional-889240 crio[5881]: time="2025-10-03T18:26:36.926567549Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:36 functional-889240 crio[5881]: time="2025-10-03T18:26:36.929801236Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:36 functional-889240 crio[5881]: time="2025-10-03T18:26:36.930188171Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:36 functional-889240 crio[5881]: time="2025-10-03T18:26:36.944674069Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=00fa20ab-4f0b-4c2e-8277-c8e0a21c8a69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:36 functional-889240 crio[5881]: time="2025-10-03T18:26:36.946023106Z" level=info msg="createCtr: deleting container ID 048a600cce13059b112019687fce28edbb01a74d78512f8f553ecbd9dafecbc2 from idIndex" id=00fa20ab-4f0b-4c2e-8277-c8e0a21c8a69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:36 functional-889240 crio[5881]: time="2025-10-03T18:26:36.946054105Z" level=info msg="createCtr: removing container 048a600cce13059b112019687fce28edbb01a74d78512f8f553ecbd9dafecbc2" id=00fa20ab-4f0b-4c2e-8277-c8e0a21c8a69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:36 functional-889240 crio[5881]: time="2025-10-03T18:26:36.946089326Z" level=info msg="createCtr: deleting container 048a600cce13059b112019687fce28edbb01a74d78512f8f553ecbd9dafecbc2 from storage" id=00fa20ab-4f0b-4c2e-8277-c8e0a21c8a69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:36 functional-889240 crio[5881]: time="2025-10-03T18:26:36.948138665Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-889240_kube-system_9d9b7aefd7427246dd018814b6979298_0" id=00fa20ab-4f0b-4c2e-8277-c8e0a21c8a69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:40 functional-889240 crio[5881]: time="2025-10-03T18:26:40.925002598Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=8286f6a4-e4d1-4e99-99c6-d455c86c17e2 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:40 functional-889240 crio[5881]: time="2025-10-03T18:26:40.925769076Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=a468d190-7222-4bd4-b6f0-08b15003496b name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:40 functional-889240 crio[5881]: time="2025-10-03T18:26:40.926574524Z" level=info msg="Creating container: kube-system/etcd-functional-889240/etcd" id=9a93a6e4-ad7e-48e5-a86a-fc4ec0b7612b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:40 functional-889240 crio[5881]: time="2025-10-03T18:26:40.926842103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:40 functional-889240 crio[5881]: time="2025-10-03T18:26:40.931090446Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:40 functional-889240 crio[5881]: time="2025-10-03T18:26:40.931488821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:40 functional-889240 crio[5881]: time="2025-10-03T18:26:40.947789662Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=9a93a6e4-ad7e-48e5-a86a-fc4ec0b7612b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:40 functional-889240 crio[5881]: time="2025-10-03T18:26:40.94915377Z" level=info msg="createCtr: deleting container ID fc25ddc2cae4957d13811e0d2971b92e2b7bac4ed5db09337f90301d1b9c8720 from idIndex" id=9a93a6e4-ad7e-48e5-a86a-fc4ec0b7612b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:40 functional-889240 crio[5881]: time="2025-10-03T18:26:40.949189492Z" level=info msg="createCtr: removing container fc25ddc2cae4957d13811e0d2971b92e2b7bac4ed5db09337f90301d1b9c8720" id=9a93a6e4-ad7e-48e5-a86a-fc4ec0b7612b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:40 functional-889240 crio[5881]: time="2025-10-03T18:26:40.949219845Z" level=info msg="createCtr: deleting container fc25ddc2cae4957d13811e0d2971b92e2b7bac4ed5db09337f90301d1b9c8720 from storage" id=9a93a6e4-ad7e-48e5-a86a-fc4ec0b7612b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:40 functional-889240 crio[5881]: time="2025-10-03T18:26:40.951628509Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-889240_kube-system_a73daf0147d5280c6db538ca59db9fe0_0" id=9a93a6e4-ad7e-48e5-a86a-fc4ec0b7612b name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:26:41.848362   15752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:41.848863   15752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:41.850433   15752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:41.850879   15752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:41.852400   15752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:26:41 up  1:09,  0 user,  load average: 0.08, 0.06, 0.04
	Linux functional-889240 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:26:34 functional-889240 kubelet[15004]:  > logger="UnhandledError"
	Oct 03 18:26:34 functional-889240 kubelet[15004]: E1003 18:26:34.950800   15004 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-889240" podUID="7e715cb6024854d45a9fa99576167e43"
	Oct 03 18:26:36 functional-889240 kubelet[15004]: E1003 18:26:36.546620   15004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-889240?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 03 18:26:36 functional-889240 kubelet[15004]: I1003 18:26:36.696467   15004 kubelet_node_status.go:75] "Attempting to register node" node="functional-889240"
	Oct 03 18:26:36 functional-889240 kubelet[15004]: E1003 18:26:36.696823   15004 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-889240"
	Oct 03 18:26:36 functional-889240 kubelet[15004]: E1003 18:26:36.924300   15004 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:26:36 functional-889240 kubelet[15004]: E1003 18:26:36.948400   15004 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:26:36 functional-889240 kubelet[15004]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:36 functional-889240 kubelet[15004]:  > podSandboxID="d2a1f7a262459adddcbc8998558ca80ae50f332cedd95d5813e79fa17642c365"
	Oct 03 18:26:36 functional-889240 kubelet[15004]: E1003 18:26:36.948480   15004 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:26:36 functional-889240 kubelet[15004]:         container kube-apiserver start failed in pod kube-apiserver-functional-889240_kube-system(9d9b7aefd7427246dd018814b6979298): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:36 functional-889240 kubelet[15004]:  > logger="UnhandledError"
	Oct 03 18:26:36 functional-889240 kubelet[15004]: E1003 18:26:36.948509   15004 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-889240" podUID="9d9b7aefd7427246dd018814b6979298"
	Oct 03 18:26:37 functional-889240 kubelet[15004]: E1003 18:26:37.310073   15004 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-889240.186b0e42e698a181  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-889240,UID:functional-889240,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-889240 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-889240,},FirstTimestamp:2025-10-03 18:22:39.917703553 +0000 UTC m=+1.131431312,LastTimestamp:2025-10-03 18:22:39.917703553 +0000 UTC m=+1.131431312,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-889240,}"
	Oct 03 18:26:39 functional-889240 kubelet[15004]: E1003 18:26:39.261580   15004 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 03 18:26:39 functional-889240 kubelet[15004]: E1003 18:26:39.696867   15004 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 03 18:26:39 functional-889240 kubelet[15004]: E1003 18:26:39.939428   15004 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-889240\" not found"
	Oct 03 18:26:40 functional-889240 kubelet[15004]: E1003 18:26:40.924607   15004 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:26:40 functional-889240 kubelet[15004]: E1003 18:26:40.951926   15004 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:26:40 functional-889240 kubelet[15004]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:40 functional-889240 kubelet[15004]:  > podSandboxID="816bf4aaa4990184bdc95c0d86d21e6c5c4acf1f357b2bf3229d2f1f717980c8"
	Oct 03 18:26:40 functional-889240 kubelet[15004]: E1003 18:26:40.952038   15004 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:26:40 functional-889240 kubelet[15004]:         container etcd start failed in pod etcd-functional-889240_kube-system(a73daf0147d5280c6db538ca59db9fe0): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:40 functional-889240 kubelet[15004]:  > logger="UnhandledError"
	Oct 03 18:26:40 functional-889240 kubelet[15004]: E1003 18:26:40.952069   15004 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-889240" podUID="a73daf0147d5280c6db538ca59db9fe0"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240: exit status 2 (300.906254ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-889240" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (733.90s)
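Every control-plane container in the CRI-O and kubelet excerpts above dies at create time with "cannot open sd-bus: No such file or directory", which is why all three kubeadm control-plane checks timed out. A minimal triage sketch along the lines kubeadm itself suggests, assuming a shell on the node (e.g. minikube ssh -p functional-889240) and stock CRI-O config locations:

	# 1) list the kube containers, as the kubeadm output above recommends
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# 2) "cannot open sd-bus" means the OCI runtime could not reach systemd's
	#    D-Bus socket; check whether the socket exists inside the node
	ls -l /run/dbus/system_bus_socket
	# 3) the systemd cgroup manager is the usual reason the runtime needs sd-bus;
	#    check the active setting (stock CRI-O paths, may differ on this image)
	sudo grep -rn cgroup_manager /etc/crio/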

x
+
TestFunctional/serial/ComponentHealth (1.85s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-889240 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-889240 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (49.68821ms)

** stderr ** 
	E1003 18:26:42.622564   51687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:42.623055   51687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:42.624516   51687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:42.624895   51687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:42.626274   51687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-889240 get po -l tier=control-plane -n kube-system -o=json": exit status 1
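Every kubectl attempt above is refused at the TCP layer before any API call is made. A quick connectivity sketch against the endpoint reported in the errors, assuming the 192.168.49.2:8441 address from the log (the docker inspect output below shows the same port published on the host as 127.0.0.1:32781):

	# /livez is the same endpoint kubeadm's control-plane-check polled
	curl -k --connect-timeout 5 https://192.168.49.2:8441/livez; echo
	# from the host, via the published port shown in the docker inspect below
	curl -k --connect-timeout 5 https://127.0.0.1:32781/livez; echo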
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-889240
helpers_test.go:243: (dbg) docker inspect functional-889240:

-- stdout --
	[
	    {
	        "Id": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	        "Created": "2025-10-03T17:59:56.619817507Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 26766,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T17:59:56.652603806Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hosts",
	        "LogPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a-json.log",
	        "Name": "/functional-889240",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-889240:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-889240",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	                "LowerDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-889240",
	                "Source": "/var/lib/docker/volumes/functional-889240/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-889240",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-889240",
	                "name.minikube.sigs.k8s.io": "functional-889240",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da15d31dc23bdd4694ae9e3b61015d7ce0d61668c73d3e386422834c6f0321d8",
	            "SandboxKey": "/var/run/docker/netns/da15d31dc23b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-889240": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:9e:1d:e9:d9:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "03281bed183d0817c0bc237b5c25093fc10222138aedde4c7deef5823759fa24",
	                    "EndpointID": "28fa584fdd6e253816ae08a2460ef02b91085c8a7996d55008876e3bd65bbc7e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-889240",
	                        "9f4f0f10b4a9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
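The Ports map in the inspect output above is how minikube locates its SSH endpoint: it reads the host port bound to the container's 22/tcp with a Go template (the same call shows up in the provisioning log below). A quick way to reproduce the lookup by hand, assuming the profile container is still up:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-889240
	# prints 32778 for the state captured above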
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240: exit status 2 (295.562455ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 logs -n 25
helpers_test.go:260: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-093146 --log_dir /tmp/nospam-093146 unpause                                                            │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ unpause │ nospam-093146 --log_dir /tmp/nospam-093146 unpause                                                            │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ unpause │ nospam-093146 --log_dir /tmp/nospam-093146 unpause                                                            │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ stop    │ nospam-093146 --log_dir /tmp/nospam-093146 stop                                                               │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ stop    │ nospam-093146 --log_dir /tmp/nospam-093146 stop                                                               │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ stop    │ nospam-093146 --log_dir /tmp/nospam-093146 stop                                                               │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ delete  │ -p nospam-093146                                                                                              │ nospam-093146     │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │ 03 Oct 25 17:59 UTC │
	│ start   │ -p functional-889240 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 17:59 UTC │                     │
	│ start   │ -p functional-889240 --alsologtostderr -v=8                                                                   │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:08 UTC │                     │
	│ cache   │ functional-889240 cache add registry.k8s.io/pause:3.1                                                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ functional-889240 cache add registry.k8s.io/pause:3.3                                                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ functional-889240 cache add registry.k8s.io/pause:latest                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ functional-889240 cache add minikube-local-cache-test:functional-889240                                       │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ functional-889240 cache delete minikube-local-cache-test:functional-889240                                    │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ ssh     │ functional-889240 ssh sudo crictl images                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ ssh     │ functional-889240 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ ssh     │ functional-889240 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │                     │
	│ cache   │ functional-889240 cache reload                                                                                │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ ssh     │ functional-889240 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ kubectl │ functional-889240 kubectl -- --context functional-889240 get pods                                             │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │                     │
	│ start   │ -p functional-889240 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:14:28
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
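	For example, the first entry below, "I1003 18:14:28.726754   38063 out.go:360]", decodes as severity I (info), date 10/03, time 18:14:28.726754, thread id 38063, and source location out.go line 360.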
	I1003 18:14:28.726754   38063 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:14:28.726997   38063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:14:28.727000   38063 out.go:374] Setting ErrFile to fd 2...
	I1003 18:14:28.727003   38063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:14:28.727268   38063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:14:28.727968   38063 out.go:368] Setting JSON to false
	I1003 18:14:28.729004   38063 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3420,"bootTime":1759511849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:14:28.729075   38063 start.go:140] virtualization: kvm guest
	I1003 18:14:28.731008   38063 out.go:179] * [functional-889240] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:14:28.732488   38063 notify.go:220] Checking for updates...
	I1003 18:14:28.732492   38063 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:14:28.733579   38063 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:14:28.734939   38063 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:14:28.736179   38063 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:14:28.737411   38063 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:14:28.738587   38063 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:14:28.740087   38063 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:14:28.740180   38063 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:14:28.764594   38063 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:14:28.764685   38063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:14:28.818292   38063 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-03 18:14:28.807876558 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:14:28.818395   38063 docker.go:318] overlay module found
	I1003 18:14:28.820263   38063 out.go:179] * Using the docker driver based on existing profile
	I1003 18:14:28.821380   38063 start.go:304] selected driver: docker
	I1003 18:14:28.821386   38063 start.go:924] validating driver "docker" against &{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:14:28.821453   38063 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:14:28.821525   38063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:14:28.873759   38063 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-03 18:14:28.863222744 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:14:28.874408   38063 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:14:28.874443   38063 cni.go:84] Creating CNI manager for ""
	I1003 18:14:28.874490   38063 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:14:28.874537   38063 start.go:348] cluster config:
	{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:14:28.876500   38063 out.go:179] * Starting "functional-889240" primary control-plane node in "functional-889240" cluster
	I1003 18:14:28.877706   38063 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:14:28.878837   38063 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:14:28.879769   38063 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:14:28.879795   38063 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:14:28.879802   38063 cache.go:58] Caching tarball of preloaded images
	I1003 18:14:28.879865   38063 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:14:28.879873   38063 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:14:28.879879   38063 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:14:28.879967   38063 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/config.json ...
	I1003 18:14:28.899017   38063 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:14:28.899026   38063 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:14:28.899040   38063 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:14:28.899069   38063 start.go:360] acquireMachinesLock for functional-889240: {Name:mk6750a9fb1c1c3747b0abf2aebe2a2d0047ae3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:14:28.899117   38063 start.go:364] duration metric: took 35.993µs to acquireMachinesLock for "functional-889240"
	I1003 18:14:28.899130   38063 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:14:28.899133   38063 fix.go:54] fixHost starting: 
	I1003 18:14:28.899327   38063 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:14:28.916111   38063 fix.go:112] recreateIfNeeded on functional-889240: state=Running err=<nil>
	W1003 18:14:28.916134   38063 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 18:14:28.918050   38063 out.go:252] * Updating the running docker "functional-889240" container ...
	I1003 18:14:28.918084   38063 machine.go:93] provisionDockerMachine start ...
	I1003 18:14:28.918165   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:28.934689   38063 main.go:141] libmachine: Using SSH client type: native
	I1003 18:14:28.934913   38063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:14:28.934921   38063 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:14:29.076697   38063 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-889240
	
	I1003 18:14:29.076727   38063 ubuntu.go:182] provisioning hostname "functional-889240"
	I1003 18:14:29.076782   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:29.092887   38063 main.go:141] libmachine: Using SSH client type: native
	I1003 18:14:29.093101   38063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:14:29.093108   38063 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-889240 && echo "functional-889240" | sudo tee /etc/hostname
	I1003 18:14:29.242886   38063 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-889240
	
	I1003 18:14:29.242996   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:29.260006   38063 main.go:141] libmachine: Using SSH client type: native
	I1003 18:14:29.260203   38063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:14:29.260220   38063 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-889240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-889240/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-889240' | sudo tee -a /etc/hosts; 
				fi
			fi
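	
	The guard above patches /etc/hosts only when no whole line already ends in the hostname (grep -x anchors the pattern to full lines), preferring to rewrite an existing 127.0.1.1 entry over appending a new one. A manual check from inside the node would look like:
	
		grep -x '.*\sfunctional-889240' /etc/hosts && echo 'hostname mapped'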
	I1003 18:14:29.401432   38063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:14:29.401463   38063 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:14:29.401485   38063 ubuntu.go:190] setting up certificates
	I1003 18:14:29.401496   38063 provision.go:84] configureAuth start
	I1003 18:14:29.401542   38063 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-889240
	I1003 18:14:29.417679   38063 provision.go:143] copyHostCerts
	I1003 18:14:29.417732   38063 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:14:29.417754   38063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:14:29.417818   38063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:14:29.417930   38063 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:14:29.417934   38063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:14:29.417959   38063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:14:29.418062   38063 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:14:29.418066   38063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:14:29.418091   38063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:14:29.418151   38063 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.functional-889240 san=[127.0.0.1 192.168.49.2 functional-889240 localhost minikube]
	I1003 18:14:29.517156   38063 provision.go:177] copyRemoteCerts
	I1003 18:14:29.517211   38063 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:14:29.517244   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:29.534610   38063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:14:29.634576   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:14:29.651152   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1003 18:14:29.667404   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 18:14:29.683300   38063 provision.go:87] duration metric: took 281.795524ms to configureAuth
	I1003 18:14:29.683315   38063 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:14:29.683451   38063 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:14:29.683536   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:29.701238   38063 main.go:141] libmachine: Using SSH client type: native
	I1003 18:14:29.701444   38063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:14:29.701460   38063 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:14:29.964774   38063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:14:29.964789   38063 machine.go:96] duration metric: took 1.046699275s to provisionDockerMachine
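	
	The drop-in written above marks the whole service CIDR (10.96.0.0/12) as an insecure registry for CRI-O, so pulls from in-cluster registries on ClusterIP addresses can go over plain HTTP; presumably the kicbase crio.service sources /etc/sysconfig/crio.minikube as an environment file (an assumption, the unit itself is not shown in this log). To confirm the file landed and the runtime came back:
	
		cat /etc/sysconfig/crio.minikube   # should echo the CRIO_MINIKUBE_OPTIONS line above
		systemctl is-active crio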
	I1003 18:14:29.964799   38063 start.go:293] postStartSetup for "functional-889240" (driver="docker")
	I1003 18:14:29.964807   38063 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:14:29.964862   38063 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:14:29.964919   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:29.982141   38063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:14:30.082849   38063 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:14:30.086167   38063 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:14:30.086182   38063 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:14:30.086190   38063 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:14:30.086245   38063 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:14:30.086322   38063 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:14:30.086390   38063 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts -> hosts in /etc/test/nested/copy/12212
	I1003 18:14:30.086418   38063 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/12212
	I1003 18:14:30.093540   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:14:30.109775   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts --> /etc/test/nested/copy/12212/hosts (40 bytes)
	I1003 18:14:30.125563   38063 start.go:296] duration metric: took 160.752264ms for postStartSetup
	I1003 18:14:30.125613   38063 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:14:30.125652   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:30.142705   38063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:14:30.239819   38063 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:14:30.244462   38063 fix.go:56] duration metric: took 1.345323072s for fixHost
	I1003 18:14:30.244476   38063 start.go:83] releasing machines lock for "functional-889240", held for 1.345352654s
	I1003 18:14:30.244534   38063 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-889240
	I1003 18:14:30.261148   38063 ssh_runner.go:195] Run: cat /version.json
	I1003 18:14:30.261181   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:30.261277   38063 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:14:30.261317   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:30.278533   38063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:14:30.278911   38063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:14:30.374843   38063 ssh_runner.go:195] Run: systemctl --version
	I1003 18:14:30.426119   38063 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:14:30.460148   38063 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:14:30.464555   38063 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:14:30.464600   38063 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:14:30.471950   38063 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
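	
	The find/mv pass above would rename any bridge or podman CNI configs to *.mk_disabled so that kindnet, recommended earlier for the docker driver plus crio combination, is the only CNI left active; here nothing matched. Listing the directory shows what, if anything, was masked:
	
		ls -la /etc/cni/net.d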
	I1003 18:14:30.471961   38063 start.go:495] detecting cgroup driver to use...
	I1003 18:14:30.472000   38063 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:14:30.472044   38063 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:14:30.485257   38063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:14:30.496477   38063 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:14:30.496516   38063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:14:30.510101   38063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:14:30.521418   38063 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:14:30.603143   38063 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:14:30.686683   38063 docker.go:234] disabling docker service ...
	I1003 18:14:30.686723   38063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:14:30.700010   38063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:14:30.711397   38063 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:14:30.789401   38063 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:14:30.867745   38063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:14:30.879595   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:14:30.892654   38063 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:14:30.892698   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.901033   38063 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:14:30.901080   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.909297   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.917346   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.925200   38063 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:14:30.932963   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.941075   38063 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.948857   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
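	
	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf declaring the pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A sketch of a spot check, with the expected values inferred from the sed expressions rather than taken from this log:
	
		sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
		# pause_image = "registry.k8s.io/pause:3.10.1"
		# cgroup_manager = "systemd"
		# conmon_cgroup = "pod"
		#   "net.ipv4.ip_unprivileged_port_start=0",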
	I1003 18:14:30.956661   38063 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:14:30.963293   38063 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:14:30.969876   38063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:14:31.048833   38063 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 18:14:31.154686   38063 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:14:31.154732   38063 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:14:31.158463   38063 start.go:563] Will wait 60s for crictl version
	I1003 18:14:31.158505   38063 ssh_runner.go:195] Run: which crictl
	I1003 18:14:31.161802   38063 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:14:31.185028   38063 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:14:31.185099   38063 ssh_runner.go:195] Run: crio --version
	I1003 18:14:31.211351   38063 ssh_runner.go:195] Run: crio --version
	I1003 18:14:31.239599   38063 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:14:31.241121   38063 cli_runner.go:164] Run: docker network inspect functional-889240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
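	
	The Go template above flattens the network inspect into one JSON object. For the network captured at the top of this log the output would be roughly the following; Subnet is inferred from the /24 prefix and the 192.168.49.1 gateway, and the Driver and MTU values are assumptions, not taken from this log:
	
		{"Name": "functional-889240","Driver": "bridge","Subnet": "192.168.49.0/24","Gateway": "192.168.49.1","MTU": 0, "ContainerIPs": ["192.168.49.2/24",]}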
	I1003 18:14:31.257340   38063 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:14:31.263166   38063 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1003 18:14:31.264167   38063 kubeadm.go:883] updating cluster {Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:14:31.264267   38063 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:14:31.264310   38063 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:14:31.293848   38063 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:14:31.293858   38063 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:14:31.293907   38063 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:14:31.319316   38063 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:14:31.319326   38063 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:14:31.319331   38063 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1003 18:14:31.319423   38063 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-889240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
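	
	The empty ExecStart= line in the unit above is the standard systemd idiom for drop-ins: it clears the inherited ExecStart before the override, since a plain (non-oneshot) service may declare only one ExecStart.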
	I1003 18:14:31.319482   38063 ssh_runner.go:195] Run: crio config
	I1003 18:14:31.363053   38063 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1003 18:14:31.363070   38063 cni.go:84] Creating CNI manager for ""
	I1003 18:14:31.363079   38063 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:14:31.363097   38063 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:14:31.363115   38063 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-889240 NodeName:functional-889240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:14:31.363211   38063 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-889240"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 18:14:31.363260   38063 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:14:31.371060   38063 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:14:31.371113   38063 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:14:31.378260   38063 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1003 18:14:31.389622   38063 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:14:31.401169   38063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
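	
	The 2063-byte payload written to /var/tmp/minikube/kubeadm.yaml.new above is the rendered kubeadm config printed earlier in this log. One way to sanity-check such a file against the pinned binaries, assuming this kubeadm version's config validate subcommand:
	
		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new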
	I1003 18:14:31.413278   38063 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 18:14:31.416670   38063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:14:31.493997   38063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:14:31.506325   38063 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240 for IP: 192.168.49.2
	I1003 18:14:31.506337   38063 certs.go:195] generating shared ca certs ...
	I1003 18:14:31.506355   38063 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:14:31.506504   38063 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:14:31.506539   38063 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:14:31.506544   38063 certs.go:257] generating profile certs ...
	I1003 18:14:31.506611   38063 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.key
	I1003 18:14:31.506654   38063 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key.eb3f8f7c
	I1003 18:14:31.506684   38063 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key
	I1003 18:14:31.506800   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:14:31.506838   38063 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:14:31.506844   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:14:31.506863   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:14:31.506885   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:14:31.506914   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:14:31.506949   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:14:31.507555   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:14:31.523949   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:14:31.540075   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:14:31.556229   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:14:31.572472   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 18:14:31.588618   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:14:31.604606   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:14:31.620082   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:14:31.636014   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:14:31.652102   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:14:31.668081   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:14:31.684503   38063 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:14:31.696104   38063 ssh_runner.go:195] Run: openssl version
	I1003 18:14:31.701806   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:14:31.709474   38063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:14:31.712729   38063 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:14:31.712776   38063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:14:31.746262   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
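	
	The 8-hex-digit link name comes from OpenSSL's subject-hash convention: library lookups expect <hash>.0 symlinks under /etc/ssl/certs, and the hash is exactly what the preceding openssl x509 -hash call printed. Reproducing the pairing by hand:
	
		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		# b5213941, hence the b5213941.0 symlink above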
	I1003 18:14:31.754238   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:14:31.762041   38063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:14:31.765354   38063 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:14:31.765385   38063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:14:31.799341   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
	I1003 18:14:31.807532   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:14:31.815668   38063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:14:31.819149   38063 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:14:31.819195   38063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:14:31.853378   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:14:31.861557   38063 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:14:31.865026   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 18:14:31.898216   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 18:14:31.931439   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 18:14:31.964848   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 18:14:31.997996   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 18:14:32.031331   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
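Each control-plane certificate is then vetted with `openssl x509 -checkend 86400`, which exits non-zero if the certificate is expired or will expire within the next 24 hours, so minikube can regenerate certs before attempting a restart. The same check can be done in pure Go with crypto/x509; this is a sketch of the equivalent logic, not minikube's implementation:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file is
// expired or will expire within the given window -- the moral
// equivalent of `openssl x509 -checkend <seconds>`.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if soon {
		fmt.Println("certificate expires within 24h; regenerate before restart")
		os.Exit(1)
	}
}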
	I1003 18:14:32.064773   38063 kubeadm.go:400] StartCluster: {Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:14:32.064844   38063 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:14:32.064884   38063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:14:32.091563   38063 cri.go:89] found id: ""
	I1003 18:14:32.091628   38063 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:14:32.099575   38063 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 18:14:32.099617   38063 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 18:14:32.099649   38063 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 18:14:32.106476   38063 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:14:32.106922   38063 kubeconfig.go:125] found "functional-889240" server: "https://192.168.49.2:8441"
	I1003 18:14:32.108169   38063 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 18:14:32.115724   38063 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-03 18:00:01.716218369 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-03 18:14:31.411258298 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
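Drift detection here rests on diff's exit-code contract: status 0 means the deployed kubeadm.yaml matches the newly rendered one, status 1 means they differ (so the cluster is reconfigured from the new file), and anything higher is an execution error. A sketch of that three-way decision in Go, using the paths from the log (illustrative, not minikube's code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrift runs `diff -u old new` and maps diff's exit status onto a
// tri-state result: exit 0 = identical, exit 1 = drift, >=2 = error.
func configDrift(oldPath, newPath string) (drift bool, patch string, err error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: no drift
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, string(out), nil // exit 1: files differ
	}
	return false, "", fmt.Errorf("diff failed: %w", err)
}

func main() {
	drift, patch, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drift {
		fmt.Println("detected kubeadm config drift:\n" + patch)
	}
}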
	I1003 18:14:32.115731   38063 kubeadm.go:1160] stopping kube-system containers ...
	I1003 18:14:32.115740   38063 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1003 18:14:32.115779   38063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:14:32.142745   38063 cri.go:89] found id: ""
	I1003 18:14:32.142803   38063 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1003 18:14:32.181602   38063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:14:32.189432   38063 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  3 18:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct  3 18:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  3 18:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  3 18:04 /etc/kubernetes/scheduler.conf
	
	I1003 18:14:32.189481   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1003 18:14:32.196894   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1003 18:14:32.203921   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:14:32.203965   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:14:32.210881   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1003 18:14:32.217766   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:14:32.217803   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:14:32.224334   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1003 18:14:32.231030   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:14:32.231065   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
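Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint; a file that does not mention https://control-plane.minikube.internal:8441 is removed so the kubeadm kubeconfig phase will regenerate it (grep exits 1 on no match, which is what "may not be in ... - will remove" reports above). A local sketch of that keep-or-remove rule, with an illustrative helper name:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// pruneStaleKubeconfig deletes path unless it references the expected
// API endpoint, matching the grep-then-rm pattern in the log above.
func pruneStaleKubeconfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil // nothing to prune
		}
		return err
	}
	if bytes.Contains(data, []byte(endpoint)) {
		return nil // endpoint present: keep the file
	}
	fmt.Printf("%s does not reference %s; removing\n", path, endpoint)
	return os.Remove(path)
}

func main() {
	const endpoint = "https://control-plane.minikube.internal:8441"
	for _, f := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := pruneStaleKubeconfig(f, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}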
	I1003 18:14:32.237472   38063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:14:32.244457   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:14:32.283268   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:14:33.742947   38063 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.459652347s)
	I1003 18:14:33.743017   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:14:33.898116   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:14:33.942573   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
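The restart path then replays individual `kubeadm init` phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd local) against the freshly copied kubeadm.yaml, rather than running a full init. A sketch of driving that phase sequence, using the versioned PATH prefix the log shows (assumes kubeadm at that path and root privileges; not minikube's actual runner):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const cfg = "/var/tmp/minikube/kubeadm.yaml"
	const binDir = "/var/lib/minikube/binaries/v1.34.1"
	// Same phase order as the log: certs, kubeconfigs, kubelet,
	// control-plane static pods, then local etcd.
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(p)...)
		args = append(args, "--config", cfg)
		cmd := exec.Command(binDir+"/kubeadm", args...)
		// env PATH=<binDir>:$PATH, as in the logged shell command.
		cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}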
	I1003 18:14:33.988522   38063 api_server.go:52] waiting for apiserver process to appear ...
	I1003 18:14:33.988576   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:34.488790   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... 117 identical probes of `sudo pgrep -xnf kube-apiserver.*minikube.*`, one every ~500ms from 18:14:34.989 through 18:15:32.988, elided; no apiserver process ever appeared ...]
	I1003 18:15:33.489525   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
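The wait above is a fixed-interval poll: run `pgrep -xnf kube-apiserver.*minikube.*` every 500ms until it succeeds or the time budget runs out; pgrep exits 0 only when a matching process exists, and in this run it never does, so minikube falls through to log collection. A standard-library sketch of such a poll loop (illustrative, not minikube's implementation):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep on an interval until the pattern matches
// or ctx expires. The first probe fires immediately, like the log.
func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
			return nil // exit 0: process found
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for %q: %w", pattern, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	if err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond); err != nil {
		fmt.Println(err) // the outcome in this test run
	}
}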
	I1003 18:15:33.989163   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:33.989216   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:34.014490   38063 cri.go:89] found id: ""
	I1003 18:15:34.014506   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.014513   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:34.014518   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:34.014556   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:34.039203   38063 cri.go:89] found id: ""
	I1003 18:15:34.039217   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.039223   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:34.039227   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:34.039266   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:34.064423   38063 cri.go:89] found id: ""
	I1003 18:15:34.064440   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.064448   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:34.064452   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:34.064494   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:34.089636   38063 cri.go:89] found id: ""
	I1003 18:15:34.089650   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.089661   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:34.089665   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:34.089707   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:34.114198   38063 cri.go:89] found id: ""
	I1003 18:15:34.114211   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.114217   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:34.114221   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:34.114261   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:34.138167   38063 cri.go:89] found id: ""
	I1003 18:15:34.138180   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.138186   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:34.138190   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:34.138234   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:34.163057   38063 cri.go:89] found id: ""
	I1003 18:15:34.163071   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.163079   38063 logs.go:284] No container was found matching "kindnet"
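With no apiserver process, minikube sweeps the CRI runtime for each expected component by container name; `crictl ps -a --quiet --name=<x>` prints one container ID per line, so empty output ("found id: \"\"", "0 containers") means the component never got a container at all. A sketch of that sweep (assumes crictl on PATH and sudo access; helper name is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// criContainerIDs returns the IDs of all containers (running or not)
// whose name matches the filter, mirroring the crictl calls above.
func criContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // --quiet: one ID per line
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := criContainerIDs(c)
		if err != nil {
			fmt.Printf("%s: query failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
		}
	}
}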
	I1003 18:15:34.163090   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:34.163102   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:34.230868   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:34.230885   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:34.242117   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:34.242134   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:34.296197   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:34.289745    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.290228    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.291731    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.292260    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.293746    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:34.289745    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.290228    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.291731    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.292260    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.293746    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:34.296208   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:34.296218   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:34.353696   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:34.353715   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
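The diagnostics gathered in each cycle come from five sources: the kubelet and CRI-O units via journalctl, kernel warnings via dmesg, `kubectl describe nodes` through the on-node binary (which fails here because the apiserver is down), and a container listing with a docker fallback. A compact sketch of that collection fan-out, with the commands copied from the log (the map-based driver is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// name -> shell command, each run through /bin/bash -c as in the log.
	sources := map[string]string{
		"kubelet":        "sudo journalctl -u kubelet -n 400",
		"dmesg":          "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes": "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"CRI-O":          "sudo journalctl -u crio -n 400",
		"containers":     "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("==> %s (err=%v)\n%s\n", name, err, out)
	}
}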
	I1003 18:15:36.882850   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:36.893827   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:36.893878   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:36.918928   38063 cri.go:89] found id: ""
	I1003 18:15:36.918945   38063 logs.go:282] 0 containers: []
	W1003 18:15:36.918954   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:36.918960   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:36.919024   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:36.943500   38063 cri.go:89] found id: ""
	I1003 18:15:36.943516   38063 logs.go:282] 0 containers: []
	W1003 18:15:36.943524   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:36.943529   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:36.943571   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:36.967892   38063 cri.go:89] found id: ""
	I1003 18:15:36.967909   38063 logs.go:282] 0 containers: []
	W1003 18:15:36.967917   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:36.967921   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:36.967961   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:36.992302   38063 cri.go:89] found id: ""
	I1003 18:15:36.992316   38063 logs.go:282] 0 containers: []
	W1003 18:15:36.992322   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:36.992326   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:36.992371   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:37.017414   38063 cri.go:89] found id: ""
	I1003 18:15:37.017429   38063 logs.go:282] 0 containers: []
	W1003 18:15:37.017435   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:37.017440   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:37.017483   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:37.042577   38063 cri.go:89] found id: ""
	I1003 18:15:37.042596   38063 logs.go:282] 0 containers: []
	W1003 18:15:37.042601   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:37.042606   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:37.042652   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:37.067424   38063 cri.go:89] found id: ""
	I1003 18:15:37.067438   38063 logs.go:282] 0 containers: []
	W1003 18:15:37.067444   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:37.067451   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:37.067466   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:37.133058   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:37.133076   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:37.144095   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:37.144109   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:37.201432   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:37.195051    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.195552    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.197089    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.197493    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.198600    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:37.195051    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.195552    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.197089    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.197493    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.198600    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:37.201453   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:37.201464   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:37.264020   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:37.264041   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:39.793917   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:39.804160   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:39.804201   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:39.828532   38063 cri.go:89] found id: ""
	I1003 18:15:39.828545   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.828551   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:39.828557   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:39.828595   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:39.854181   38063 cri.go:89] found id: ""
	I1003 18:15:39.854194   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.854199   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:39.854203   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:39.854241   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:39.878636   38063 cri.go:89] found id: ""
	I1003 18:15:39.878649   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.878655   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:39.878665   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:39.878714   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:39.903647   38063 cri.go:89] found id: ""
	I1003 18:15:39.903662   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.903672   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:39.903678   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:39.903727   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:39.928358   38063 cri.go:89] found id: ""
	I1003 18:15:39.928371   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.928377   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:39.928382   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:39.928425   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:39.952698   38063 cri.go:89] found id: ""
	I1003 18:15:39.952712   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.952718   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:39.952722   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:39.952770   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:39.977762   38063 cri.go:89] found id: ""
	I1003 18:15:39.977779   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.977788   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:39.977798   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:39.977810   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:40.047503   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:40.047521   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:40.058597   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:40.058612   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:40.113456   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:40.107101    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.107593    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.109120    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.109527    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.111020    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:40.107101    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.107593    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.109120    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.109527    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.111020    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:40.113474   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:40.113485   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:40.173884   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:40.173904   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:42.702098   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:42.712135   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:42.712176   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:42.735423   38063 cri.go:89] found id: ""
	I1003 18:15:42.735438   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.735445   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:42.735450   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:42.735502   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:42.758834   38063 cri.go:89] found id: ""
	I1003 18:15:42.758847   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.758853   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:42.758857   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:42.758918   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:42.782548   38063 cri.go:89] found id: ""
	I1003 18:15:42.782564   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.782573   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:42.782578   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:42.782631   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:42.808289   38063 cri.go:89] found id: ""
	I1003 18:15:42.808307   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.808315   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:42.808321   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:42.808362   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:42.832106   38063 cri.go:89] found id: ""
	I1003 18:15:42.832120   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.832126   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:42.832136   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:42.832178   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:42.856681   38063 cri.go:89] found id: ""
	I1003 18:15:42.856697   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.856704   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:42.856708   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:42.856753   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:42.880778   38063 cri.go:89] found id: ""
	I1003 18:15:42.880793   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.880799   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:42.880806   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:42.880815   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:42.891568   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:42.891591   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:42.944856   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:42.938479    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.938960    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.940463    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.940834    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.942358    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:42.938479    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.938960    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.940463    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.940834    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.942358    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:42.944869   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:42.944883   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:43.008325   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:43.008342   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:43.034919   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:43.034934   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:45.601892   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:45.612293   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:45.612337   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:45.636800   38063 cri.go:89] found id: ""
	I1003 18:15:45.636816   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.636825   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:45.636831   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:45.636897   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:45.663419   38063 cri.go:89] found id: ""
	I1003 18:15:45.663431   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.663442   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:45.663446   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:45.663484   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:45.688326   38063 cri.go:89] found id: ""
	I1003 18:15:45.688340   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.688346   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:45.688350   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:45.688390   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:45.713903   38063 cri.go:89] found id: ""
	I1003 18:15:45.713916   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.713923   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:45.713929   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:45.713969   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:45.738540   38063 cri.go:89] found id: ""
	I1003 18:15:45.738554   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.738560   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:45.738565   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:45.738626   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:45.763029   38063 cri.go:89] found id: ""
	I1003 18:15:45.763042   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.763049   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:45.763054   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:45.763105   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:45.787593   38063 cri.go:89] found id: ""
	I1003 18:15:45.787605   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.787613   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:45.787619   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:45.787628   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:45.814410   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:45.814426   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:45.879690   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:45.879708   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:45.890632   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:45.890646   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:45.945900   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:45.939503    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.940097    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.941591    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.942022    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.943469    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:45.939503    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.940097    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.941591    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.942022    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.943469    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:45.945911   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:45.945920   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:48.510685   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:48.520989   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:48.521030   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:48.545850   38063 cri.go:89] found id: ""
	I1003 18:15:48.545863   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.545871   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:48.545875   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:48.545917   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:48.570678   38063 cri.go:89] found id: ""
	I1003 18:15:48.570691   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.570699   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:48.570704   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:48.570758   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:48.594906   38063 cri.go:89] found id: ""
	I1003 18:15:48.594922   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.594931   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:48.594936   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:48.595011   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:48.620934   38063 cri.go:89] found id: ""
	I1003 18:15:48.620951   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.620958   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:48.620963   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:48.621033   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:48.645916   38063 cri.go:89] found id: ""
	I1003 18:15:48.645933   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.645942   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:48.645947   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:48.646009   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:48.670919   38063 cri.go:89] found id: ""
	I1003 18:15:48.670932   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.670939   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:48.670944   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:48.671004   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:48.695257   38063 cri.go:89] found id: ""
	I1003 18:15:48.695274   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.695281   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:48.695289   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:48.695298   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:48.723183   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:48.723198   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:48.790906   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:48.790924   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:48.802517   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:48.802531   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:48.858274   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:48.851795    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.852286    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.853794    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.854187    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.855729    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:48.851795    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.852286    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.853794    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.854187    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.855729    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:48.858294   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:48.858309   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
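
Each polling cycle above follows the same shape: a pgrep for a running kube-apiserver process, then one crictl query per expected component, all of which come back empty on this node. A hypothetical condensed form of that per-component probe, using the same crictl flags that appear in the log:

    # condensed sketch of the per-cycle container probe seen above
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "no container found matching \"$c\""
    done

The seven names are exactly the ones the harness checks: the control-plane components, kube-proxy, coredns, and the kindnet CNI agent.
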
	I1003 18:15:51.418365   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:51.428790   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:51.428851   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:51.453214   38063 cri.go:89] found id: ""
	I1003 18:15:51.453228   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.453235   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:51.453241   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:51.453302   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:51.478216   38063 cri.go:89] found id: ""
	I1003 18:15:51.478231   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.478241   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:51.478247   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:51.478298   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:51.503301   38063 cri.go:89] found id: ""
	I1003 18:15:51.503316   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.503322   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:51.503327   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:51.503368   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:51.528130   38063 cri.go:89] found id: ""
	I1003 18:15:51.528146   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.528152   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:51.528157   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:51.528196   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:51.553046   38063 cri.go:89] found id: ""
	I1003 18:15:51.553076   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.553084   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:51.553091   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:51.553133   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:51.577406   38063 cri.go:89] found id: ""
	I1003 18:15:51.577420   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.577426   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:51.577432   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:51.577471   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:51.602068   38063 cri.go:89] found id: ""
	I1003 18:15:51.602084   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.602092   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:51.602102   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:51.602114   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:51.629035   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:51.629051   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:51.697997   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:51.698016   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:51.710748   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:51.710769   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:51.764330   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:51.757745    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.758298    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.759850    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.760310    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.761740    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:51.757745    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.758298    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.759850    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.760310    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.761740    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:51.764338   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:51.764348   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:54.323078   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:54.333510   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:54.333559   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:54.357777   38063 cri.go:89] found id: ""
	I1003 18:15:54.357790   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.357796   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:54.357800   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:54.357841   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:54.381421   38063 cri.go:89] found id: ""
	I1003 18:15:54.381435   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.381442   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:54.381447   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:54.381495   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:54.404951   38063 cri.go:89] found id: ""
	I1003 18:15:54.404969   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.404991   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:54.404999   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:54.405045   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:54.429154   38063 cri.go:89] found id: ""
	I1003 18:15:54.429172   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.429181   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:54.429186   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:54.429224   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:54.452874   38063 cri.go:89] found id: ""
	I1003 18:15:54.452895   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.452903   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:54.452907   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:54.452946   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:54.477916   38063 cri.go:89] found id: ""
	I1003 18:15:54.477929   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.477937   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:54.477942   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:54.478001   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:54.503676   38063 cri.go:89] found id: ""
	I1003 18:15:54.503692   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.503699   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:54.503706   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:54.503716   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:54.571451   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:54.571469   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:54.582598   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:54.582614   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:54.635288   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:54.629106    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.629524    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.631026    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.631408    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.632845    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:54.629106    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.629524    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.631026    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.631408    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.632845    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:54.635301   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:54.635338   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:54.693328   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:54.693348   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
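
When all of the probes come back empty, the harness falls back to gathering several log sources. Four of them read purely local state and succeed; only "describe nodes" has to go through the apiserver, which is why it is the one step that errors every cycle. The same collection is runnable by hand, with the paths exactly as they appear in this log:

    sudo journalctl -u kubelet -n 400       # kubelet unit log
    sudo journalctl -u crio -n 400          # CRI-O unit log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and worse, human-readable, no pager/color
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig   # the only apiserver-dependent step
    sudo crictl ps -a                       # container status
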
	I1003 18:15:57.224616   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:57.234873   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:57.234916   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:57.259150   38063 cri.go:89] found id: ""
	I1003 18:15:57.259164   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.259170   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:57.259175   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:57.259224   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:57.282636   38063 cri.go:89] found id: ""
	I1003 18:15:57.282650   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.282662   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:57.282667   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:57.282716   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:57.307774   38063 cri.go:89] found id: ""
	I1003 18:15:57.307792   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.307800   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:57.307806   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:57.307846   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:57.331087   38063 cri.go:89] found id: ""
	I1003 18:15:57.331101   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.331107   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:57.331112   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:57.331153   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:57.356108   38063 cri.go:89] found id: ""
	I1003 18:15:57.356125   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.356200   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:57.356209   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:57.356267   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:57.381138   38063 cri.go:89] found id: ""
	I1003 18:15:57.381154   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.381161   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:57.381166   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:57.381206   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:57.405322   38063 cri.go:89] found id: ""
	I1003 18:15:57.405339   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.405345   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:57.405353   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:57.405362   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:57.463330   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:57.463345   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:57.491754   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:57.491771   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:57.557710   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:57.557727   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:57.569135   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:57.569150   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:57.622275   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:57.615880    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.616369    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.617874    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.618325    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.619768    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:57.615880    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.616369    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.617874    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.618325    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.619768    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
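
The timestamps show a fresh cycle roughly every three seconds (18:15:45, :48, :51, :54, :57, then 18:16:00 and onward), i.e. a fixed-interval wait for the apiserver rather than an exponential backoff. A hypothetical loop with the same cadence, reusing the pgrep pattern from the log:

    # fixed-interval wait approximating the ~3s polling cadence in this log
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      echo "apiserver process not found, retrying in 3s"
      sleep 3
    done
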
	I1003 18:16:00.123157   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:00.133350   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:00.133393   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:00.157946   38063 cri.go:89] found id: ""
	I1003 18:16:00.157958   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.157965   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:00.157970   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:00.158035   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:00.182943   38063 cri.go:89] found id: ""
	I1003 18:16:00.182956   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.182962   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:00.182967   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:00.183026   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:00.206834   38063 cri.go:89] found id: ""
	I1003 18:16:00.206848   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.206854   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:00.206858   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:00.206901   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:00.231944   38063 cri.go:89] found id: ""
	I1003 18:16:00.231959   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.231965   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:00.231970   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:00.232027   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:00.257587   38063 cri.go:89] found id: ""
	I1003 18:16:00.257607   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.257613   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:00.257619   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:00.257662   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:00.281667   38063 cri.go:89] found id: ""
	I1003 18:16:00.281683   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.281690   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:00.281694   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:00.281735   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:00.306161   38063 cri.go:89] found id: ""
	I1003 18:16:00.306173   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.306183   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:00.306189   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:00.306199   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:00.334078   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:00.334094   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:00.398782   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:00.398800   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:00.410100   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:00.410118   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:00.464563   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:00.458004    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.458485    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.459956    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.460373    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.461844    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:00.458004    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.458485    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.459956    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.460373    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.461844    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:00.464573   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:00.464584   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:03.025201   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:03.035449   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:03.035489   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:03.060615   38063 cri.go:89] found id: ""
	I1003 18:16:03.060629   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.060638   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:03.060644   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:03.060695   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:03.085028   38063 cri.go:89] found id: ""
	I1003 18:16:03.085041   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.085047   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:03.085052   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:03.085101   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:03.109281   38063 cri.go:89] found id: ""
	I1003 18:16:03.109295   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.109301   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:03.109306   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:03.109343   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:03.133199   38063 cri.go:89] found id: ""
	I1003 18:16:03.133212   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.133218   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:03.133223   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:03.133271   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:03.157142   38063 cri.go:89] found id: ""
	I1003 18:16:03.157158   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.157167   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:03.157174   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:03.157215   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:03.181156   38063 cri.go:89] found id: ""
	I1003 18:16:03.181170   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.181177   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:03.181182   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:03.181225   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:03.207371   38063 cri.go:89] found id: ""
	I1003 18:16:03.207385   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.207392   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:03.207399   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:03.207407   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:03.268072   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:03.268093   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:03.295655   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:03.295675   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:03.359095   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:03.359116   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:03.370093   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:03.370110   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:03.423681   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:03.416458    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:03.416947    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:03.419089    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:03.419495    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:03.421012    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:03.416458    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:03.416947    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:03.419089    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:03.419495    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:03.421012    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:05.925327   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:05.935882   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:05.935927   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:05.960833   38063 cri.go:89] found id: ""
	I1003 18:16:05.960850   38063 logs.go:282] 0 containers: []
	W1003 18:16:05.960858   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:05.960864   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:05.960918   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:05.985562   38063 cri.go:89] found id: ""
	I1003 18:16:05.985577   38063 logs.go:282] 0 containers: []
	W1003 18:16:05.985585   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:05.985592   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:05.985644   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:06.008796   38063 cri.go:89] found id: ""
	I1003 18:16:06.008813   38063 logs.go:282] 0 containers: []
	W1003 18:16:06.008822   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:06.008827   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:06.008865   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:06.034023   38063 cri.go:89] found id: ""
	I1003 18:16:06.034037   38063 logs.go:282] 0 containers: []
	W1003 18:16:06.034043   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:06.034048   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:06.034099   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:06.057314   38063 cri.go:89] found id: ""
	I1003 18:16:06.057330   38063 logs.go:282] 0 containers: []
	W1003 18:16:06.057340   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:06.057347   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:06.057396   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:06.082843   38063 cri.go:89] found id: ""
	I1003 18:16:06.082859   38063 logs.go:282] 0 containers: []
	W1003 18:16:06.082865   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:06.082870   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:06.082921   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:06.106237   38063 cri.go:89] found id: ""
	I1003 18:16:06.106251   38063 logs.go:282] 0 containers: []
	W1003 18:16:06.106257   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:06.106264   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:06.106276   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:06.175390   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:06.175407   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:06.186550   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:06.186565   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:06.239490   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:06.233165    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:06.233624    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:06.235128    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:06.235537    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:06.237048    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:06.233165    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:06.233624    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:06.235128    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:06.235537    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:06.237048    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:06.239500   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:06.239513   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:06.301454   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:06.301474   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
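
The "container status" step uses a small fallback chain: resolve crictl via which when it is on PATH, otherwise try the bare name, and if crictl fails outright fall back to docker. Written with $() instead of backticks, the equivalent command is:

    # container-status fallback, as run above: try crictl first, then docker
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
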
	I1003 18:16:08.830757   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:08.841156   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:08.841199   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:08.865562   38063 cri.go:89] found id: ""
	I1003 18:16:08.865578   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.865584   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:08.865589   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:08.865636   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:08.889510   38063 cri.go:89] found id: ""
	I1003 18:16:08.889527   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.889536   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:08.889543   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:08.889588   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:08.914125   38063 cri.go:89] found id: ""
	I1003 18:16:08.914140   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.914146   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:08.914150   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:08.914195   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:08.937681   38063 cri.go:89] found id: ""
	I1003 18:16:08.937697   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.937706   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:08.937711   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:08.937752   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:08.961970   38063 cri.go:89] found id: ""
	I1003 18:16:08.961998   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.962006   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:08.962012   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:08.962073   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:08.986853   38063 cri.go:89] found id: ""
	I1003 18:16:08.986870   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.986877   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:08.986883   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:08.986953   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:09.012531   38063 cri.go:89] found id: ""
	I1003 18:16:09.012547   38063 logs.go:282] 0 containers: []
	W1003 18:16:09.012555   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:09.012570   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:09.012581   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:09.078036   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:09.078053   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:09.088904   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:09.088918   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:09.143252   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:09.136367    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:09.136907    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:09.138514    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:09.139001    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:09.140648    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:09.136367    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:09.136907    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:09.138514    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:09.139001    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:09.140648    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:09.143263   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:09.143275   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:09.201869   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:09.201887   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:11.730105   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:11.740344   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:11.740384   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:11.765234   38063 cri.go:89] found id: ""
	I1003 18:16:11.765247   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.765256   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:11.765261   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:11.765318   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:11.789130   38063 cri.go:89] found id: ""
	I1003 18:16:11.789143   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.789149   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:11.789154   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:11.789198   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:11.815036   38063 cri.go:89] found id: ""
	I1003 18:16:11.815050   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.815058   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:11.815064   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:11.815113   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:11.839467   38063 cri.go:89] found id: ""
	I1003 18:16:11.839483   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.839490   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:11.839495   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:11.839539   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:11.863864   38063 cri.go:89] found id: ""
	I1003 18:16:11.863893   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.863899   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:11.863904   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:11.863955   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:11.889464   38063 cri.go:89] found id: ""
	I1003 18:16:11.889480   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.889488   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:11.889495   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:11.889535   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:11.912845   38063 cri.go:89] found id: ""
	I1003 18:16:11.912862   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.912870   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:11.912880   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:11.912904   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:11.966773   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:11.959444    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:11.960161    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:11.961014    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:11.962530    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:11.962898    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:11.959444    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:11.960161    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:11.961014    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:11.962530    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:11.962898    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:11.966785   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:11.966795   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:12.025128   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:12.025146   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:12.053945   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:12.053960   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:12.119420   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:12.119438   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
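The cycle above is minikube's apiserver health loop: it polls for a kube-apiserver process, enumerates each expected control-plane container with crictl, and falls back to log gathering when none are found. A minimal sketch of that polling pattern follows; it assumes crictl is on PATH and reachable via sudo, and it is an illustration of the loop visible in the log, not minikube's actual implementation.

// health_probe.go: simplified sketch of the polling loop in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerExists runs `crictl ps -a --quiet --name=<name>` and reports
// whether any container ID came back, mirroring the cri.go lines above.
func containerExists(name string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) != "", nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet",
	}
	// The log's timestamps suggest a retry roughly every three seconds.
	for attempt := 0; attempt < 5; attempt++ {
		healthy := true
		for _, c := range components {
			ok, err := containerExists(c)
			if err != nil || !ok {
				fmt.Printf("no container found matching %q\n", c)
				healthy = false
			}
		}
		if healthy {
			fmt.Println("all control-plane containers present")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for control-plane containers")
}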
	I1003 18:16:14.631092   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:14.641283   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:14.641330   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:14.665808   38063 cri.go:89] found id: ""
	I1003 18:16:14.665821   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.665827   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:14.665832   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:14.665874   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:14.690191   38063 cri.go:89] found id: ""
	I1003 18:16:14.690204   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.690211   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:14.690216   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:14.690266   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:14.715586   38063 cri.go:89] found id: ""
	I1003 18:16:14.715598   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.715619   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:14.715623   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:14.715677   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:14.740173   38063 cri.go:89] found id: ""
	I1003 18:16:14.740190   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.740198   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:14.740202   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:14.740247   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:14.764574   38063 cri.go:89] found id: ""
	I1003 18:16:14.764589   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.764595   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:14.764599   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:14.764653   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:14.788993   38063 cri.go:89] found id: ""
	I1003 18:16:14.789007   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.789014   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:14.789018   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:14.789059   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:14.813679   38063 cri.go:89] found id: ""
	I1003 18:16:14.813692   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.813699   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:14.813706   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:14.813715   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:14.840363   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:14.840378   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:14.906264   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:14.906280   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:14.917237   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:14.917251   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:14.971230   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:14.964471    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:14.965000    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:14.966522    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:14.966918    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:14.968491    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:14.964471    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:14.965000    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:14.966522    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:14.966918    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:14.968491    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:14.971246   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:14.971257   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:17.534133   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:17.544453   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:17.544502   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:17.568816   38063 cri.go:89] found id: ""
	I1003 18:16:17.568834   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.568841   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:17.568847   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:17.568899   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:17.593442   38063 cri.go:89] found id: ""
	I1003 18:16:17.593460   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.593466   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:17.593472   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:17.593515   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:17.617737   38063 cri.go:89] found id: ""
	I1003 18:16:17.617754   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.617761   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:17.617766   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:17.617804   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:17.642180   38063 cri.go:89] found id: ""
	I1003 18:16:17.642194   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.642201   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:17.642206   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:17.642250   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:17.666189   38063 cri.go:89] found id: ""
	I1003 18:16:17.666204   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.666210   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:17.666214   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:17.666259   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:17.689273   38063 cri.go:89] found id: ""
	I1003 18:16:17.689289   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.689297   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:17.689305   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:17.689345   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:17.714353   38063 cri.go:89] found id: ""
	I1003 18:16:17.714373   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.714381   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:17.714394   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:17.714407   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:17.768746   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:17.762135    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:17.762597    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:17.764136    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:17.764533    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:17.766023    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:17.762135    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:17.762597    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:17.764136    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:17.764533    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:17.766023    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:17.768759   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:17.768768   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:17.830139   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:17.830159   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:17.858326   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:17.858342   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:17.922889   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:17.922911   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
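Every "describe nodes" attempt above fails the same way: nothing is listening on localhost:8441, so kubectl's connection is refused before TLS or auth even begin. A minimal reachability probe for that state is sketched below; the port 8441 comes from the log, everything else is illustrative.

// apiserver_probe.go: sketch of the reachability check implied by the
// repeated "connection refused" errors above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// This is the state the log captures: no listener on the apiserver
		// port, so every kubectl call fails immediately.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}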
	I1003 18:16:20.435863   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:20.446321   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:20.446361   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:20.471731   38063 cri.go:89] found id: ""
	I1003 18:16:20.471743   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.471749   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:20.471753   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:20.471792   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:20.495730   38063 cri.go:89] found id: ""
	I1003 18:16:20.495747   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.495755   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:20.495760   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:20.495815   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:20.520555   38063 cri.go:89] found id: ""
	I1003 18:16:20.520572   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.520581   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:20.520597   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:20.520650   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:20.545197   38063 cri.go:89] found id: ""
	I1003 18:16:20.545210   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.545216   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:20.545220   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:20.545258   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:20.569113   38063 cri.go:89] found id: ""
	I1003 18:16:20.569126   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.569132   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:20.569138   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:20.569189   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:20.593468   38063 cri.go:89] found id: ""
	I1003 18:16:20.593483   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.593491   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:20.593496   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:20.593545   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:20.617852   38063 cri.go:89] found id: ""
	I1003 18:16:20.617865   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.617872   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:20.617878   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:20.617887   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:20.680360   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:20.680379   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:20.691258   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:20.691271   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:20.745174   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:20.738655    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.739179    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.740672    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.741122    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.742610    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:20.738655    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.739179    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.740672    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.741122    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.742610    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:20.745187   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:20.745197   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:20.806835   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:20.806853   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:23.335788   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:23.346440   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:23.346505   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:23.371250   38063 cri.go:89] found id: ""
	I1003 18:16:23.371263   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.371269   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:23.371273   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:23.371315   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:23.396570   38063 cri.go:89] found id: ""
	I1003 18:16:23.396585   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.396592   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:23.396596   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:23.396646   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:23.420703   38063 cri.go:89] found id: ""
	I1003 18:16:23.420718   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.420728   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:23.420735   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:23.420783   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:23.445294   38063 cri.go:89] found id: ""
	I1003 18:16:23.445310   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.445319   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:23.445326   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:23.445372   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:23.470082   38063 cri.go:89] found id: ""
	I1003 18:16:23.470100   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.470106   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:23.470110   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:23.470148   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:23.494417   38063 cri.go:89] found id: ""
	I1003 18:16:23.494432   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.494441   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:23.494446   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:23.494489   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:23.519492   38063 cri.go:89] found id: ""
	I1003 18:16:23.519507   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.519516   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:23.519526   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:23.519538   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:23.583328   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:23.583346   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:23.594696   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:23.594710   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:23.649094   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:23.642344    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.642882    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.644368    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.644805    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.646275    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:23.642344    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.642882    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.644368    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.644805    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.646275    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:23.649104   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:23.649113   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:23.710665   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:23.710684   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:26.239439   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:26.250313   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:26.250355   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:26.275460   38063 cri.go:89] found id: ""
	I1003 18:16:26.275476   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.275484   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:26.275490   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:26.275544   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:26.300685   38063 cri.go:89] found id: ""
	I1003 18:16:26.300701   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.300710   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:26.300716   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:26.300760   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:26.324124   38063 cri.go:89] found id: ""
	I1003 18:16:26.324141   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.324150   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:26.324156   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:26.324203   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:26.349331   38063 cri.go:89] found id: ""
	I1003 18:16:26.349348   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.349357   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:26.349363   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:26.349407   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:26.373924   38063 cri.go:89] found id: ""
	I1003 18:16:26.373938   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.373944   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:26.373948   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:26.374020   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:26.398561   38063 cri.go:89] found id: ""
	I1003 18:16:26.398575   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.398581   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:26.398593   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:26.398637   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:26.423043   38063 cri.go:89] found id: ""
	I1003 18:16:26.423055   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.423064   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:26.423073   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:26.423085   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:26.448940   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:26.448957   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:26.514345   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:26.514362   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:26.525206   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:26.525218   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:26.579573   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:26.572848    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.573316    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.574821    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.575280    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.576738    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:26.572848    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.573316    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.574821    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.575280    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.576738    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:26.579590   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:26.579599   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:29.139399   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:29.149491   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:29.149546   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:29.174745   38063 cri.go:89] found id: ""
	I1003 18:16:29.174759   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.174764   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:29.174769   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:29.174809   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:29.199728   38063 cri.go:89] found id: ""
	I1003 18:16:29.199741   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.199747   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:29.199752   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:29.199803   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:29.225114   38063 cri.go:89] found id: ""
	I1003 18:16:29.225130   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.225139   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:29.225145   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:29.225208   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:29.249942   38063 cri.go:89] found id: ""
	I1003 18:16:29.249959   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.249968   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:29.249990   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:29.250054   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:29.274658   38063 cri.go:89] found id: ""
	I1003 18:16:29.274676   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.274684   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:29.274690   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:29.274740   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:29.299132   38063 cri.go:89] found id: ""
	I1003 18:16:29.299147   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.299153   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:29.299159   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:29.299207   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:29.323399   38063 cri.go:89] found id: ""
	I1003 18:16:29.323414   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.323420   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:29.323427   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:29.323436   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:29.388896   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:29.388919   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:29.400252   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:29.400267   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:29.453553   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:29.447303    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.447746    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.449289    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.449640    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.451133    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:29.447303    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.447746    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.449289    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.449640    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.451133    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:29.453604   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:29.453615   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:29.515234   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:29.515257   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:32.045106   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:32.055516   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:32.055563   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:32.081412   38063 cri.go:89] found id: ""
	I1003 18:16:32.081425   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.081431   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:32.081436   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:32.081476   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:32.106569   38063 cri.go:89] found id: ""
	I1003 18:16:32.106585   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.106591   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:32.106595   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:32.106634   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:32.131668   38063 cri.go:89] found id: ""
	I1003 18:16:32.131684   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.131692   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:32.131699   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:32.131745   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:32.156465   38063 cri.go:89] found id: ""
	I1003 18:16:32.156479   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.156485   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:32.156490   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:32.156566   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:32.181247   38063 cri.go:89] found id: ""
	I1003 18:16:32.181260   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.181267   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:32.181271   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:32.181314   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:32.205219   38063 cri.go:89] found id: ""
	I1003 18:16:32.205236   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.205245   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:32.205252   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:32.205305   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:32.229751   38063 cri.go:89] found id: ""
	I1003 18:16:32.229767   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.229776   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:32.229785   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:32.229797   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:32.257251   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:32.257266   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:32.325308   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:32.325326   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:32.336569   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:32.336584   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:32.391680   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:32.384542    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.385163    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.386741    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.387204    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.388820    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:32.384542    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.385163    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.386741    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.387204    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.388820    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:32.391693   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:32.391706   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:34.954303   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:34.965018   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:34.965070   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:34.990955   38063 cri.go:89] found id: ""
	I1003 18:16:34.990970   38063 logs.go:282] 0 containers: []
	W1003 18:16:34.990992   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:34.990999   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:34.991061   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:35.015676   38063 cri.go:89] found id: ""
	I1003 18:16:35.015689   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.015695   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:35.015699   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:35.015737   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:35.040155   38063 cri.go:89] found id: ""
	I1003 18:16:35.040168   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.040174   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:35.040179   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:35.040218   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:35.065569   38063 cri.go:89] found id: ""
	I1003 18:16:35.065587   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.065596   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:35.065602   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:35.065663   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:35.090276   38063 cri.go:89] found id: ""
	I1003 18:16:35.090288   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.090295   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:35.090299   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:35.090339   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:35.114581   38063 cri.go:89] found id: ""
	I1003 18:16:35.114617   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.114627   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:35.114633   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:35.114688   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:35.139719   38063 cri.go:89] found id: ""
	I1003 18:16:35.139734   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.139744   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:35.139753   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:35.139766   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:35.205015   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:35.205034   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:35.216021   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:35.216039   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:35.269655   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:35.262830    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.263341    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.264897    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.265346    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.266885    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:35.262830    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.263341    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.264897    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.265346    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.266885    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
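Every cycle in this section fails the same way: nothing is listening on localhost:8441 inside the node, so each kubectl invocation exits with status 1 and "connection refused". A minimal sketch of probing the same endpoint by hand, assuming shell access to the node (for example via `minikube ssh`) and that curl is available there; /readyz is the standard kube-apiserver readiness path, and port 8441 is taken from the log above:

    # Probe the apiserver endpoint that kubectl is failing to reach.
    # -k skips TLS verification; a refused connection confirms no listener.
    curl -k --max-time 5 https://localhost:8441/readyz \
      || echo "apiserver not reachable on :8441"

If the connection is refused here as well, the failure is at the process level (no kube-apiserver container running), not a kubeconfig or certificate problem.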
	I1003 18:16:35.269664   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:35.269674   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:35.330604   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:35.330634   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:37.861503   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:37.871534   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:37.871641   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:37.895946   38063 cri.go:89] found id: ""
	I1003 18:16:37.895961   38063 logs.go:282] 0 containers: []
	W1003 18:16:37.895971   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:37.895995   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:37.896048   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:37.921286   38063 cri.go:89] found id: ""
	I1003 18:16:37.921301   38063 logs.go:282] 0 containers: []
	W1003 18:16:37.921308   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:37.921314   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:37.921364   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:37.946115   38063 cri.go:89] found id: ""
	I1003 18:16:37.946131   38063 logs.go:282] 0 containers: []
	W1003 18:16:37.946141   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:37.946148   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:37.946194   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:37.970857   38063 cri.go:89] found id: ""
	I1003 18:16:37.970871   38063 logs.go:282] 0 containers: []
	W1003 18:16:37.970878   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:37.970882   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:37.970930   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:37.997387   38063 cri.go:89] found id: ""
	I1003 18:16:37.997405   38063 logs.go:282] 0 containers: []
	W1003 18:16:37.997412   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:37.997416   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:37.997459   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:38.022848   38063 cri.go:89] found id: ""
	I1003 18:16:38.022862   38063 logs.go:282] 0 containers: []
	W1003 18:16:38.022869   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:38.022874   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:38.022938   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:38.048588   38063 cri.go:89] found id: ""
	I1003 18:16:38.048624   38063 logs.go:282] 0 containers: []
	W1003 18:16:38.048632   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:38.048640   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:38.048653   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:38.110031   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:38.110050   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:38.137498   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:38.137513   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:38.203958   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:38.203994   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:38.215727   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:38.215744   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:38.269765   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:38.263066    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.263531    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.265220    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.265597    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.267129    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:38.263066    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.263531    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.265220    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.265597    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.267129    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
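The per-component checks above are independent invocations of the same crictl query. Collected into one loop, using exactly the command the log shows, a sketch that reproduces the enumeration reads:

    # One pass over the control-plane components checked in each cycle.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      if [ -z "$ids" ]; then
        echo "no container matching \"$c\""
      else
        echo "$c: $ids"
      fi
    done

In this run every component comes back empty, which matches the repeated `found id: ""` lines.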
	I1003 18:16:40.770413   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:40.780831   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:40.780874   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:40.804826   38063 cri.go:89] found id: ""
	I1003 18:16:40.804839   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.804845   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:40.804850   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:40.804890   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:40.830833   38063 cri.go:89] found id: ""
	I1003 18:16:40.830850   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.830858   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:40.830864   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:40.830930   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:40.856650   38063 cri.go:89] found id: ""
	I1003 18:16:40.856669   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.856677   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:40.856693   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:40.856748   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:40.881236   38063 cri.go:89] found id: ""
	I1003 18:16:40.881250   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.881256   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:40.881261   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:40.881301   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:40.905820   38063 cri.go:89] found id: ""
	I1003 18:16:40.905836   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.905843   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:40.905849   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:40.905900   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:40.931504   38063 cri.go:89] found id: ""
	I1003 18:16:40.931520   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.931527   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:40.931532   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:40.931583   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:40.957539   38063 cri.go:89] found id: ""
	I1003 18:16:40.957553   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.957560   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:40.957567   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:40.957578   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:41.015948   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:41.015969   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:41.044701   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:41.044726   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:41.112388   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:41.112406   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:41.123384   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:41.123399   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:41.177789   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:41.171080    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.171701    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.173280    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.173749    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.175246    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:41.171080    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.171701    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.173280    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.173749    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.175246    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:43.679496   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:43.689800   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:43.689843   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:43.714130   38063 cri.go:89] found id: ""
	I1003 18:16:43.714145   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.714152   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:43.714156   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:43.714197   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:43.738900   38063 cri.go:89] found id: ""
	I1003 18:16:43.738916   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.738924   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:43.738929   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:43.738972   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:43.763822   38063 cri.go:89] found id: ""
	I1003 18:16:43.763835   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.763841   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:43.763845   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:43.763884   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:43.789103   38063 cri.go:89] found id: ""
	I1003 18:16:43.789120   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.789128   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:43.789134   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:43.789187   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:43.813436   38063 cri.go:89] found id: ""
	I1003 18:16:43.813447   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.813455   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:43.813460   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:43.813513   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:43.838306   38063 cri.go:89] found id: ""
	I1003 18:16:43.838322   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.838331   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:43.838338   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:43.838382   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:43.863413   38063 cri.go:89] found id: ""
	I1003 18:16:43.863429   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.863435   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:43.863442   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:43.863451   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:43.931299   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:43.931317   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:43.942307   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:43.942321   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:43.997476   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:43.990626    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.991191    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.992711    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.993154    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.994633    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:43.990626    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.991191    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.992711    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.993154    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.994633    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:43.997488   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:43.997500   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:44.053446   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:44.053464   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
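The timestamps show the same probe repeating at roughly three-second intervals: pgrep for a kube-apiserver process, then a fresh round of container and log checks when it is absent. A hedged sketch of that wait loop (the three-second interval is inferred from the timestamps; the 300-second deadline is an assumption, not taken from the log):

    # Poll until a kube-apiserver process matching minikube's pattern appears.
    deadline=$(( $(date +%s) + 300 ))   # assumed timeout, not from the log
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3   # cadence inferred from the log timestamps
    done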
	I1003 18:16:46.583423   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:46.593663   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:46.593719   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:46.618188   38063 cri.go:89] found id: ""
	I1003 18:16:46.618202   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.618208   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:46.618213   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:46.618250   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:46.642929   38063 cri.go:89] found id: ""
	I1003 18:16:46.642943   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.642949   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:46.642954   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:46.643015   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:46.667745   38063 cri.go:89] found id: ""
	I1003 18:16:46.667761   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.667770   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:46.667775   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:46.667818   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:46.692080   38063 cri.go:89] found id: ""
	I1003 18:16:46.692092   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.692098   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:46.692102   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:46.692140   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:46.716789   38063 cri.go:89] found id: ""
	I1003 18:16:46.716807   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.716816   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:46.716822   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:46.716867   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:46.741361   38063 cri.go:89] found id: ""
	I1003 18:16:46.741375   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.741382   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:46.741389   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:46.741437   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:46.765330   38063 cri.go:89] found id: ""
	I1003 18:16:46.765343   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.765349   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:46.765357   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:46.765368   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:46.830366   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:46.830385   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:46.841266   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:46.841279   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:46.894396   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:46.888072    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.888542    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.890079    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.890459    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.891950    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:46.888072    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.888542    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.890079    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.890459    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.891950    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:46.894415   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:46.894426   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:46.954277   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:46.954295   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:49.482413   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:49.492881   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:49.492921   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:49.516075   38063 cri.go:89] found id: ""
	I1003 18:16:49.516093   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.516102   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:49.516108   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:49.516154   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:49.542911   38063 cri.go:89] found id: ""
	I1003 18:16:49.542928   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.542936   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:49.542940   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:49.543006   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:49.568965   38063 cri.go:89] found id: ""
	I1003 18:16:49.568996   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.569005   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:49.569009   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:49.569055   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:49.593221   38063 cri.go:89] found id: ""
	I1003 18:16:49.593238   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.593246   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:49.593251   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:49.593302   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:49.618807   38063 cri.go:89] found id: ""
	I1003 18:16:49.618824   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.618831   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:49.618848   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:49.618893   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:49.642342   38063 cri.go:89] found id: ""
	I1003 18:16:49.642357   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.642363   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:49.642368   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:49.642407   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:49.666474   38063 cri.go:89] found id: ""
	I1003 18:16:49.666488   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.666494   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:49.666502   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:49.666513   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:49.722457   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:49.722476   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:49.750153   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:49.750170   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:49.814369   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:49.814387   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:49.825405   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:49.825418   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:49.879924   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:49.873380    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.873871    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.875556    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.876003    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.877459    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:49.873380    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.873871    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.875556    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.876003    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.877459    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:52.380662   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:52.391022   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:52.391066   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:52.414399   38063 cri.go:89] found id: ""
	I1003 18:16:52.414416   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.414423   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:52.414428   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:52.414466   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:52.438285   38063 cri.go:89] found id: ""
	I1003 18:16:52.438301   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.438308   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:52.438312   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:52.438352   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:52.463204   38063 cri.go:89] found id: ""
	I1003 18:16:52.463218   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.463224   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:52.463229   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:52.463271   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:52.487579   38063 cri.go:89] found id: ""
	I1003 18:16:52.487593   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.487598   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:52.487605   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:52.487658   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:52.512643   38063 cri.go:89] found id: ""
	I1003 18:16:52.512657   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.512663   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:52.512667   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:52.512705   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:52.538897   38063 cri.go:89] found id: ""
	I1003 18:16:52.538913   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.538920   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:52.538926   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:52.538970   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:52.563277   38063 cri.go:89] found id: ""
	I1003 18:16:52.563294   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.563302   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:52.563310   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:52.563321   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:52.622624   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:52.622642   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:52.650058   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:52.650074   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:52.714242   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:52.714261   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:52.725305   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:52.725319   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:52.777801   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:52.771320   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.772111   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.773166   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.773579   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.775090   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:52.771320   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.772111   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.773166   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.773579   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.775090   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
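Each cycle gathers the same five log sources. Copied verbatim from the commands above, the full set as one reproducible snippet (the kubectl binary path and version are as shown in this run):

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

Note the fallback chain in the last line: if `which` finds no crictl, the literal name is substituted and tried anyway, and if that also fails the runner falls back to `docker ps -a`.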
	I1003 18:16:55.279440   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:55.290117   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:55.290161   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:55.315904   38063 cri.go:89] found id: ""
	I1003 18:16:55.315920   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.315926   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:55.315930   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:55.315996   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:55.340568   38063 cri.go:89] found id: ""
	I1003 18:16:55.340582   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.340588   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:55.340593   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:55.340631   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:55.365911   38063 cri.go:89] found id: ""
	I1003 18:16:55.365927   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.365937   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:55.365943   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:55.366003   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:55.390838   38063 cri.go:89] found id: ""
	I1003 18:16:55.390855   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.390864   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:55.390870   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:55.390924   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:55.414625   38063 cri.go:89] found id: ""
	I1003 18:16:55.414638   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.414651   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:55.414657   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:55.414712   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:55.438460   38063 cri.go:89] found id: ""
	I1003 18:16:55.438474   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.438480   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:55.438484   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:55.438522   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:55.463131   38063 cri.go:89] found id: ""
	I1003 18:16:55.463148   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.463156   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:55.463165   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:55.463176   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:55.516949   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:55.510276   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.510824   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.512379   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.512767   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.514262   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:55.510276   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.510824   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.512379   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.512767   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.514262   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:55.516958   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:55.516968   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:55.573992   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:55.574010   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:55.601928   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:55.601944   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:55.667452   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:55.667470   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:58.180268   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:58.190896   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:58.190942   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:58.215802   38063 cri.go:89] found id: ""
	I1003 18:16:58.215820   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.215828   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:58.215835   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:58.215885   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:58.240607   38063 cri.go:89] found id: ""
	I1003 18:16:58.240623   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.240632   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:58.240638   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:58.240719   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:58.264676   38063 cri.go:89] found id: ""
	I1003 18:16:58.264689   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.264696   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:58.264703   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:58.264742   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:58.289482   38063 cri.go:89] found id: ""
	I1003 18:16:58.289496   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.289502   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:58.289507   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:58.289558   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:58.314683   38063 cri.go:89] found id: ""
	I1003 18:16:58.314699   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.314708   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:58.314714   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:58.314763   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:58.340874   38063 cri.go:89] found id: ""
	I1003 18:16:58.340900   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.340910   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:58.340918   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:58.340989   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:58.365744   38063 cri.go:89] found id: ""
	I1003 18:16:58.365765   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.365774   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:58.365785   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:58.365798   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:58.424919   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:58.424938   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:58.452107   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:58.452122   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:58.516078   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:58.516098   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:58.527186   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:58.527200   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:58.581397   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:58.574853   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.575363   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.576868   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.577319   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.578848   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:58.574853   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.575363   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.576868   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.577319   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.578848   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:17:01.083146   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:01.093268   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:01.093310   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:01.117816   38063 cri.go:89] found id: ""
	I1003 18:17:01.117833   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.117840   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:01.117844   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:01.117882   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:01.141987   38063 cri.go:89] found id: ""
	I1003 18:17:01.142004   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.142012   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:01.142018   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:01.142057   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:01.165255   38063 cri.go:89] found id: ""
	I1003 18:17:01.165271   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.165277   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:01.165282   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:01.165323   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:01.189244   38063 cri.go:89] found id: ""
	I1003 18:17:01.189257   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.189264   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:01.189269   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:01.189310   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:01.213365   38063 cri.go:89] found id: ""
	I1003 18:17:01.213381   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.213388   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:01.213395   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:01.213442   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:01.240957   38063 cri.go:89] found id: ""
	I1003 18:17:01.240972   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.241000   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:01.241007   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:01.241051   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:01.267290   38063 cri.go:89] found id: ""
	I1003 18:17:01.267306   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.267312   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:01.267320   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:01.267331   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:01.295273   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:01.295290   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:01.364816   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:01.364836   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:01.376420   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:01.376437   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:01.432587   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:01.425391   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.425950   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.427491   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.428036   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.429594   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:01.432599   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:01.432613   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
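With every container lookup empty, each cycle gathers the same five sources (container status, kubelet, dmesg, describe nodes, CRI-O; the order rotates between cycles), of which only the describe-nodes step fails. A hedged sketch for collecting the host-level ones manually, using the same hypothetical <profile> placeholder as above (quoting of the piped commands may need adjusting for your shell):

	# Hypothetical manual versions of the 'Gathering logs for ...' steps.
	minikube -p <profile> ssh -- 'sudo journalctl -u kubelet -n 400'
	minikube -p <profile> ssh -- 'sudo journalctl -u crio -n 400'
	minikube -p <profile> ssh -- 'sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400'
	minikube -p <profile> ssh -- 'sudo crictl ps -a || sudo docker ps -a'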
	I1003 18:17:03.992551   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:04.002736   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:04.002789   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:04.027153   38063 cri.go:89] found id: ""
	I1003 18:17:04.027169   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.027177   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:04.027183   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:04.027240   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:04.052384   38063 cri.go:89] found id: ""
	I1003 18:17:04.052399   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.052406   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:04.052411   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:04.052458   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:04.077210   38063 cri.go:89] found id: ""
	I1003 18:17:04.077225   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.077233   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:04.077243   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:04.077298   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:04.102192   38063 cri.go:89] found id: ""
	I1003 18:17:04.102208   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.102217   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:04.102223   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:04.102266   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:04.126632   38063 cri.go:89] found id: ""
	I1003 18:17:04.126647   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.126653   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:04.126658   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:04.126700   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:04.152736   38063 cri.go:89] found id: ""
	I1003 18:17:04.152752   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.152761   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:04.152768   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:04.152814   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:04.177062   38063 cri.go:89] found id: ""
	I1003 18:17:04.177080   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.177089   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:04.177099   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:04.177112   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:04.188211   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:04.188225   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:04.242641   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:04.235414   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.235943   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.237902   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.238634   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.240168   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:04.242649   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:04.242661   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:04.302342   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:04.302368   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:04.330691   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:04.330717   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:06.899448   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:06.909768   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:06.909813   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:06.934090   38063 cri.go:89] found id: ""
	I1003 18:17:06.934103   38063 logs.go:282] 0 containers: []
	W1003 18:17:06.934109   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:06.934114   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:06.934152   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:06.958320   38063 cri.go:89] found id: ""
	I1003 18:17:06.958334   38063 logs.go:282] 0 containers: []
	W1003 18:17:06.958340   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:06.958343   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:06.958381   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:06.984766   38063 cri.go:89] found id: ""
	I1003 18:17:06.984783   38063 logs.go:282] 0 containers: []
	W1003 18:17:06.984792   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:06.984797   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:06.984857   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:07.011801   38063 cri.go:89] found id: ""
	I1003 18:17:07.011818   38063 logs.go:282] 0 containers: []
	W1003 18:17:07.011827   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:07.011832   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:07.011871   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:07.036323   38063 cri.go:89] found id: ""
	I1003 18:17:07.036339   38063 logs.go:282] 0 containers: []
	W1003 18:17:07.036347   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:07.036352   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:07.036402   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:07.061101   38063 cri.go:89] found id: ""
	I1003 18:17:07.061117   38063 logs.go:282] 0 containers: []
	W1003 18:17:07.061126   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:07.061134   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:07.061184   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:07.085274   38063 cri.go:89] found id: ""
	I1003 18:17:07.085286   38063 logs.go:282] 0 containers: []
	W1003 18:17:07.085293   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:07.085300   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:07.085309   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:07.146317   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:07.146334   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:07.175088   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:07.175102   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:07.243716   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:07.243735   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:07.255174   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:07.255190   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:07.308657   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:07.302083   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:07.302582   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:07.304157   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:07.304555   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:07.306037   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:09.809372   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:09.819499   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:09.819542   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:09.844409   38063 cri.go:89] found id: ""
	I1003 18:17:09.844423   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.844435   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:09.844439   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:09.844478   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:09.868767   38063 cri.go:89] found id: ""
	I1003 18:17:09.868781   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.868787   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:09.868791   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:09.868832   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:09.891798   38063 cri.go:89] found id: ""
	I1003 18:17:09.891810   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.891817   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:09.891821   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:09.891858   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:09.917378   38063 cri.go:89] found id: ""
	I1003 18:17:09.917393   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.917399   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:09.917405   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:09.917450   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:09.942686   38063 cri.go:89] found id: ""
	I1003 18:17:09.942699   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.942705   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:09.942710   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:09.942750   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:09.966104   38063 cri.go:89] found id: ""
	I1003 18:17:09.966117   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.966123   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:09.966128   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:09.966166   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:09.993525   38063 cri.go:89] found id: ""
	I1003 18:17:09.993538   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.993544   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:09.993551   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:09.993560   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:10.062246   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:10.062265   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:10.074081   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:10.074098   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:10.128788   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:10.122249   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:10.122773   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:10.124287   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:10.124702   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:10.126163   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:10.128809   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:10.128820   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:10.186632   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:10.186649   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:12.716320   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:12.726641   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:12.726693   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:12.750384   38063 cri.go:89] found id: ""
	I1003 18:17:12.750397   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.750403   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:12.750407   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:12.750446   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:12.775313   38063 cri.go:89] found id: ""
	I1003 18:17:12.775330   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.775338   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:12.775344   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:12.775384   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:12.800228   38063 cri.go:89] found id: ""
	I1003 18:17:12.800244   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.800251   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:12.800256   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:12.800298   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:12.825275   38063 cri.go:89] found id: ""
	I1003 18:17:12.825291   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.825300   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:12.825317   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:12.825372   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:12.849255   38063 cri.go:89] found id: ""
	I1003 18:17:12.849271   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.849279   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:12.849285   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:12.849336   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:12.873407   38063 cri.go:89] found id: ""
	I1003 18:17:12.873421   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.873427   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:12.873431   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:12.873482   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:12.896762   38063 cri.go:89] found id: ""
	I1003 18:17:12.896778   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.896786   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:12.896795   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:12.896807   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:12.960955   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:12.960983   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:12.972163   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:12.972178   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:13.025479   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:13.018959   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:13.019441   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:13.020904   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:13.021379   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:13.022868   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:13.025493   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:13.025506   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:13.086473   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:13.086491   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:15.616095   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:15.626385   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:15.626428   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:15.650771   38063 cri.go:89] found id: ""
	I1003 18:17:15.650785   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.650792   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:15.650796   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:15.650837   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:15.675587   38063 cri.go:89] found id: ""
	I1003 18:17:15.675629   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.675637   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:15.675643   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:15.675705   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:15.699653   38063 cri.go:89] found id: ""
	I1003 18:17:15.699667   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.699673   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:15.699677   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:15.699716   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:15.724414   38063 cri.go:89] found id: ""
	I1003 18:17:15.724427   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.724435   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:15.724441   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:15.724496   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:15.749056   38063 cri.go:89] found id: ""
	I1003 18:17:15.749069   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.749077   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:15.749082   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:15.749123   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:15.773830   38063 cri.go:89] found id: ""
	I1003 18:17:15.773846   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.773859   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:15.773864   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:15.773907   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:15.798104   38063 cri.go:89] found id: ""
	I1003 18:17:15.798120   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.798126   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:15.798133   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:15.798143   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:15.851960   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:15.845372   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:15.845936   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:15.847479   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:15.847794   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:15.849288   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:15.851990   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:15.852005   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:15.909042   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:15.909059   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:15.936198   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:15.936212   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:16.001546   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:16.001563   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:18.514268   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:18.524824   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:18.524867   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:18.549240   38063 cri.go:89] found id: ""
	I1003 18:17:18.549252   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.549259   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:18.549263   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:18.549304   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:18.573832   38063 cri.go:89] found id: ""
	I1003 18:17:18.573846   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.573851   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:18.573855   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:18.573893   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:18.600015   38063 cri.go:89] found id: ""
	I1003 18:17:18.600030   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.600038   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:18.600042   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:18.600092   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:18.624175   38063 cri.go:89] found id: ""
	I1003 18:17:18.624187   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.624193   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:18.624197   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:18.624235   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:18.647489   38063 cri.go:89] found id: ""
	I1003 18:17:18.647506   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.647515   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:18.647521   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:18.647563   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:18.671643   38063 cri.go:89] found id: ""
	I1003 18:17:18.671657   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.671663   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:18.671668   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:18.671706   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:18.696078   38063 cri.go:89] found id: ""
	I1003 18:17:18.696092   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.696098   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:18.696105   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:18.696121   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:18.753226   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:18.753245   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:18.780990   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:18.781068   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:18.847947   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:18.847966   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:18.859021   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:18.859037   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:18.912345   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:18.905516   11225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:18.906367   11225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:18.907929   11225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:18.908373   11225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:18.909849   11225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:21.414030   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:21.425003   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:21.425051   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:21.450060   38063 cri.go:89] found id: ""
	I1003 18:17:21.450073   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.450080   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:21.450085   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:21.450124   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:21.474474   38063 cri.go:89] found id: ""
	I1003 18:17:21.474488   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.474494   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:21.474499   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:21.474539   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:21.498126   38063 cri.go:89] found id: ""
	I1003 18:17:21.498142   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.498149   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:21.498154   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:21.498203   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:21.523905   38063 cri.go:89] found id: ""
	I1003 18:17:21.523923   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.523932   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:21.523938   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:21.524008   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:21.548187   38063 cri.go:89] found id: ""
	I1003 18:17:21.548201   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.548207   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:21.548211   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:21.548252   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:21.572667   38063 cri.go:89] found id: ""
	I1003 18:17:21.572680   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.572686   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:21.572692   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:21.572736   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:21.597807   38063 cri.go:89] found id: ""
	I1003 18:17:21.597824   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.597832   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:21.597839   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:21.597848   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:21.652152   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:21.645230   11331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:21.645729   11331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:21.647282   11331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:21.647701   11331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:21.649188   11331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:21.652166   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:21.652179   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:21.713448   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:21.713465   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:21.742437   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:21.742451   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:21.805537   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:21.805554   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:24.317361   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:24.327608   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:24.327671   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:24.354286   38063 cri.go:89] found id: ""
	I1003 18:17:24.354305   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.354315   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:24.354320   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:24.354379   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:24.378696   38063 cri.go:89] found id: ""
	I1003 18:17:24.378710   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.378718   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:24.378724   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:24.378782   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:24.402575   38063 cri.go:89] found id: ""
	I1003 18:17:24.402589   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.402595   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:24.402600   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:24.402648   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:24.427138   38063 cri.go:89] found id: ""
	I1003 18:17:24.427154   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.427162   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:24.427169   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:24.427211   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:24.451521   38063 cri.go:89] found id: ""
	I1003 18:17:24.451536   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.451543   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:24.451547   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:24.451590   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:24.475930   38063 cri.go:89] found id: ""
	I1003 18:17:24.475943   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.475949   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:24.475954   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:24.476012   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:24.500074   38063 cri.go:89] found id: ""
	I1003 18:17:24.500087   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.500093   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:24.500100   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:24.500109   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:24.566537   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:24.566553   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:24.577539   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:24.577553   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:24.632738   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:24.626123   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:24.626592   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:24.628151   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:24.628571   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:24.630095   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:24.632749   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:24.632758   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:24.690610   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:24.690628   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
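Between cycles, minikube waits and re-checks for a running kube-apiserver process; the timestamps (18:17:24, :27, :30, ...) show roughly a three-second cadence. A sketch of that outer wait loop (illustrative only; the real retry and timeout logic lives in minikube's Go code, not in shell):

    # Poll until an apiserver process for this profile appears, re-running
    # the container scan and log gathering on each miss.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
      # ...re-scan containers and gather kubelet/dmesg/CRI-O logs, as above...
    done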
	I1003 18:17:27.219340   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:27.229548   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:27.229602   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:27.253625   38063 cri.go:89] found id: ""
	I1003 18:17:27.253647   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.253655   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:27.253661   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:27.253712   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:27.277732   38063 cri.go:89] found id: ""
	I1003 18:17:27.277747   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.277756   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:27.277762   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:27.277804   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:27.301627   38063 cri.go:89] found id: ""
	I1003 18:17:27.301641   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.301647   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:27.301652   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:27.301701   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:27.327361   38063 cri.go:89] found id: ""
	I1003 18:17:27.327377   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.327386   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:27.327392   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:27.327455   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:27.351272   38063 cri.go:89] found id: ""
	I1003 18:17:27.351287   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.351296   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:27.351301   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:27.351354   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:27.376015   38063 cri.go:89] found id: ""
	I1003 18:17:27.376028   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.376034   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:27.376039   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:27.376078   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:27.401069   38063 cri.go:89] found id: ""
	I1003 18:17:27.401083   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.401089   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:27.401096   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:27.401106   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:27.461887   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:27.461903   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:27.489794   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:27.489811   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:27.556416   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:27.556437   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:27.567650   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:27.567666   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:27.621254   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:27.614343   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:27.615016   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:27.616631   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:27.617100   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:27.618643   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:30.121948   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:30.132195   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:30.132251   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:30.157028   38063 cri.go:89] found id: ""
	I1003 18:17:30.157044   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.157052   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:30.157059   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:30.157114   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:30.181243   38063 cri.go:89] found id: ""
	I1003 18:17:30.181257   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.181267   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:30.181272   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:30.181327   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:30.204956   38063 cri.go:89] found id: ""
	I1003 18:17:30.204969   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.204990   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:30.204996   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:30.205049   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:30.229309   38063 cri.go:89] found id: ""
	I1003 18:17:30.229324   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.229332   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:30.229353   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:30.229404   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:30.253288   38063 cri.go:89] found id: ""
	I1003 18:17:30.253302   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.253308   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:30.253312   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:30.253353   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:30.276885   38063 cri.go:89] found id: ""
	I1003 18:17:30.276900   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.276907   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:30.276912   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:30.276954   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:30.302076   38063 cri.go:89] found id: ""
	I1003 18:17:30.302093   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.302102   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:30.302111   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:30.302122   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:30.355957   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:30.349507   11695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:30.350118   11695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:30.351635   11695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:30.351999   11695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:30.353476   11695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:30.355967   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:30.355997   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:30.416595   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:30.416617   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:30.444417   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:30.444433   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:30.511869   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:30.511888   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
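Every "describe nodes" attempt fails the same way: the guest's own kubectl binary (/var/lib/minikube/binaries/v1.34.1/kubectl) gets "connection refused" on localhost:8441, the apiserver port this profile is configured for. A refusal, as opposed to a timeout, means nothing is listening on the port at all, which matches the empty crictl scans: the kube-apiserver container was never created. A quick bash-only probe to confirm from inside the guest (a sketch; it relies only on bash's /dev/tcp support, no extra tools):

    # Succeeds if something accepts TCP connections on 8441, fails otherwise.
    (exec 3<>/dev/tcp/localhost/8441) 2>/dev/null \
      && echo "apiserver port is listening" \
      || echo "connection refused - apiserver is not up"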
	I1003 18:17:33.023698   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:33.034090   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:33.034130   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:33.058440   38063 cri.go:89] found id: ""
	I1003 18:17:33.058454   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.058463   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:33.058469   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:33.058516   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:33.083214   38063 cri.go:89] found id: ""
	I1003 18:17:33.083227   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.083233   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:33.083238   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:33.083278   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:33.107106   38063 cri.go:89] found id: ""
	I1003 18:17:33.107121   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.107128   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:33.107132   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:33.107177   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:33.132152   38063 cri.go:89] found id: ""
	I1003 18:17:33.132169   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.132178   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:33.132184   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:33.132237   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:33.156458   38063 cri.go:89] found id: ""
	I1003 18:17:33.156475   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.156486   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:33.156492   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:33.156541   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:33.181450   38063 cri.go:89] found id: ""
	I1003 18:17:33.181466   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.181474   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:33.181480   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:33.181520   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:33.204281   38063 cri.go:89] found id: ""
	I1003 18:17:33.204299   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.204307   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:33.204316   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:33.204328   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:33.268843   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:33.268862   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:33.280428   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:33.280444   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:33.333875   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:33.327300   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:33.327741   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:33.329337   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:33.329778   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:33.331336   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:33.333888   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:33.333899   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:33.395285   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:33.395303   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:35.924723   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:35.935417   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:35.935459   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:35.959423   38063 cri.go:89] found id: ""
	I1003 18:17:35.959437   38063 logs.go:282] 0 containers: []
	W1003 18:17:35.959444   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:35.959448   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:35.959497   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:35.984930   38063 cri.go:89] found id: ""
	I1003 18:17:35.984943   38063 logs.go:282] 0 containers: []
	W1003 18:17:35.984949   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:35.984953   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:35.985011   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:36.010660   38063 cri.go:89] found id: ""
	I1003 18:17:36.010676   38063 logs.go:282] 0 containers: []
	W1003 18:17:36.010685   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:36.010692   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:36.010750   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:36.036836   38063 cri.go:89] found id: ""
	I1003 18:17:36.036851   38063 logs.go:282] 0 containers: []
	W1003 18:17:36.036859   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:36.036865   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:36.036931   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:36.062748   38063 cri.go:89] found id: ""
	I1003 18:17:36.062764   38063 logs.go:282] 0 containers: []
	W1003 18:17:36.062774   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:36.062780   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:36.062832   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:36.088459   38063 cri.go:89] found id: ""
	I1003 18:17:36.088476   38063 logs.go:282] 0 containers: []
	W1003 18:17:36.088485   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:36.088492   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:36.088544   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:36.118150   38063 cri.go:89] found id: ""
	I1003 18:17:36.118166   38063 logs.go:282] 0 containers: []
	W1003 18:17:36.118174   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:36.118183   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:36.118195   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:36.188996   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:36.189016   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:36.201752   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:36.201774   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:36.259714   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:36.253085   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:36.253879   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:36.255461   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:36.255860   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:36.257025   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:36.259724   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:36.259734   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:36.319327   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:36.319348   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:38.849084   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:38.860041   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:38.860087   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:38.885371   38063 cri.go:89] found id: ""
	I1003 18:17:38.885387   38063 logs.go:282] 0 containers: []
	W1003 18:17:38.885396   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:38.885403   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:38.885448   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:38.910420   38063 cri.go:89] found id: ""
	I1003 18:17:38.910433   38063 logs.go:282] 0 containers: []
	W1003 18:17:38.910439   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:38.910443   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:38.910492   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:38.935082   38063 cri.go:89] found id: ""
	I1003 18:17:38.935098   38063 logs.go:282] 0 containers: []
	W1003 18:17:38.935113   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:38.935119   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:38.935163   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:38.959589   38063 cri.go:89] found id: ""
	I1003 18:17:38.959605   38063 logs.go:282] 0 containers: []
	W1003 18:17:38.959614   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:38.959620   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:38.959664   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:38.983218   38063 cri.go:89] found id: ""
	I1003 18:17:38.983231   38063 logs.go:282] 0 containers: []
	W1003 18:17:38.983237   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:38.983241   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:38.983283   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:39.007734   38063 cri.go:89] found id: ""
	I1003 18:17:39.007748   38063 logs.go:282] 0 containers: []
	W1003 18:17:39.007754   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:39.007759   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:39.007803   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:39.032274   38063 cri.go:89] found id: ""
	I1003 18:17:39.032288   38063 logs.go:282] 0 containers: []
	W1003 18:17:39.032294   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:39.032301   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:39.032310   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:39.085898   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:39.079359   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:39.079847   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:39.081436   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:39.081830   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:39.083352   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:39.085913   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:39.085926   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:39.147336   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:39.147355   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:39.174505   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:39.174520   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:39.236749   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:39.236770   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:41.751919   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:41.762279   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:41.762318   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:41.788348   38063 cri.go:89] found id: ""
	I1003 18:17:41.788364   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.788370   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:41.788375   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:41.788416   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:41.813364   38063 cri.go:89] found id: ""
	I1003 18:17:41.813377   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.813383   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:41.813387   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:41.813428   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:41.838263   38063 cri.go:89] found id: ""
	I1003 18:17:41.838278   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.838286   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:41.838296   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:41.838342   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:41.863852   38063 cri.go:89] found id: ""
	I1003 18:17:41.863866   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.863875   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:41.863880   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:41.863928   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:41.888046   38063 cri.go:89] found id: ""
	I1003 18:17:41.888059   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.888065   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:41.888069   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:41.888123   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:41.912391   38063 cri.go:89] found id: ""
	I1003 18:17:41.912407   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.912414   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:41.912419   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:41.912465   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:41.936635   38063 cri.go:89] found id: ""
	I1003 18:17:41.936652   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.936667   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:41.936673   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:41.936682   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:41.999904   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:41.999923   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:42.010760   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:42.010774   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:42.063379   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:42.056776   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:42.057312   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:42.058864   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:42.059272   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:42.060765   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:42.063391   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:42.063403   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:42.120707   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:42.120724   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
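The "container status" command above is a fallback chain: `which crictl || echo crictl` resolves crictl's full path when it is installed (falling back to the bare name so the failure message stays legible), and the trailing `|| sudo docker ps -a` covers hosts where only the docker CLI is available. The same chain, spelled out:

    # Prefer crictl (full path when `which` finds it), then fall back to docker.
    CRICTL=$(which crictl || echo crictl)
    sudo "$CRICTL" ps -a || sudo docker ps -a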
	I1003 18:17:44.649184   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:44.659323   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:44.659383   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:44.684688   38063 cri.go:89] found id: ""
	I1003 18:17:44.684705   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.684714   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:44.684720   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:44.684766   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:44.709094   38063 cri.go:89] found id: ""
	I1003 18:17:44.709107   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.709113   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:44.709117   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:44.709155   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:44.733401   38063 cri.go:89] found id: ""
	I1003 18:17:44.733417   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.733426   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:44.733430   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:44.733469   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:44.757753   38063 cri.go:89] found id: ""
	I1003 18:17:44.757772   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.757780   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:44.757786   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:44.757841   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:44.781910   38063 cri.go:89] found id: ""
	I1003 18:17:44.781926   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.781933   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:44.781939   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:44.781995   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:44.805801   38063 cri.go:89] found id: ""
	I1003 18:17:44.805820   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.805829   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:44.805835   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:44.805882   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:44.830172   38063 cri.go:89] found id: ""
	I1003 18:17:44.830187   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.830195   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:44.830204   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:44.830218   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:44.898633   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:44.898651   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:44.909788   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:44.909802   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:44.964112   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:44.957005   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:44.957997   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:44.959562   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:44.960003   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:44.961510   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:44.964123   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:44.964137   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:45.022483   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:45.022503   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:47.552208   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:47.562597   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:47.562644   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:47.587653   38063 cri.go:89] found id: ""
	I1003 18:17:47.587666   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.587672   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:47.587676   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:47.587722   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:47.611271   38063 cri.go:89] found id: ""
	I1003 18:17:47.611287   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.611294   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:47.611298   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:47.611344   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:47.635604   38063 cri.go:89] found id: ""
	I1003 18:17:47.635617   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.635625   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:47.635631   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:47.635704   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:47.660903   38063 cri.go:89] found id: ""
	I1003 18:17:47.660926   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.660933   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:47.660938   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:47.661007   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:47.686109   38063 cri.go:89] found id: ""
	I1003 18:17:47.686122   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.686129   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:47.686133   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:47.686172   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:47.710137   38063 cri.go:89] found id: ""
	I1003 18:17:47.710153   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.710161   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:47.710167   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:47.710207   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:47.734797   38063 cri.go:89] found id: ""
	I1003 18:17:47.734817   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.734826   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:47.734835   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:47.734849   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:47.745548   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:47.745565   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:47.799254   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:47.792392   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:47.793029   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:47.794533   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:47.794963   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:47.796403   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:47.799265   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:47.799274   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:47.861703   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:47.861720   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:47.888938   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:47.888953   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
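The loop above is minikube's control-plane probe: a pgrep for a running kube-apiserver, followed by a crictl query per expected component, all of which come back empty. A minimal shell sketch of the same probe, assuming SSH access through "minikube ssh" (the component names and crictl flags are taken verbatim from the log):

	# probe for an apiserver process, then list containers for each control-plane component
	minikube ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    # --quiet prints container IDs only; empty output means the container was never created
	    minikube ssh -- sudo crictl ps -a --quiet --name="$c"
	done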
	I1003 18:17:50.454766   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:50.465005   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:50.465050   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:50.489074   38063 cri.go:89] found id: ""
	I1003 18:17:50.489087   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.489093   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:50.489098   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:50.489139   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:50.513935   38063 cri.go:89] found id: ""
	I1003 18:17:50.513950   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.513959   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:50.513964   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:50.514027   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:50.539148   38063 cri.go:89] found id: ""
	I1003 18:17:50.539166   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.539173   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:50.539179   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:50.539220   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:50.562923   38063 cri.go:89] found id: ""
	I1003 18:17:50.562944   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.562950   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:50.562959   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:50.563021   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:50.587009   38063 cri.go:89] found id: ""
	I1003 18:17:50.587022   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.587029   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:50.587033   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:50.587081   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:50.611334   38063 cri.go:89] found id: ""
	I1003 18:17:50.611350   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.611356   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:50.611361   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:50.611410   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:50.634818   38063 cri.go:89] found id: ""
	I1003 18:17:50.634832   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.634839   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:50.634846   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:50.634856   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:50.696044   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:50.696061   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:50.722679   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:50.722696   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:50.789104   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:50.789122   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:50.800113   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:50.800126   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:50.853877   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:50.846722   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:50.847312   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:50.848906   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:50.849353   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:50.851079   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:53.354772   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:53.365080   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:53.365139   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:53.389900   38063 cri.go:89] found id: ""
	I1003 18:17:53.389913   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.389920   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:53.389930   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:53.389993   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:53.414775   38063 cri.go:89] found id: ""
	I1003 18:17:53.414790   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.414797   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:53.414801   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:53.414847   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:53.439429   38063 cri.go:89] found id: ""
	I1003 18:17:53.439445   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.439454   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:53.439460   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:53.439506   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:53.464200   38063 cri.go:89] found id: ""
	I1003 18:17:53.464214   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.464220   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:53.464225   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:53.464263   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:53.488529   38063 cri.go:89] found id: ""
	I1003 18:17:53.488542   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.488550   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:53.488556   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:53.488612   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:53.512935   38063 cri.go:89] found id: ""
	I1003 18:17:53.512950   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.512957   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:53.512962   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:53.513028   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:53.536738   38063 cri.go:89] found id: ""
	I1003 18:17:53.536754   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.536763   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:53.536771   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:53.536784   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:53.602221   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:53.602237   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:53.613558   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:53.613573   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:53.667019   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:53.660222   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:53.660704   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:53.662310   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:53.662769   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:53.664227   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:53.667029   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:53.667039   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:53.725461   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:53.725480   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
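Each gather step above pulls only the last 400 lines from the relevant systemd unit. To read the same kubelet and CRI-O output directly, a sketch (the -n 400 limit mirrors the log; running it through "minikube ssh" and disabling the pager are assumptions about the environment):

	minikube ssh -- sudo journalctl -u kubelet -n 400 --no-pager
	minikube ssh -- sudo journalctl -u crio -n 400 --no-pager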
	I1003 18:17:56.254692   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:56.264956   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:56.265017   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:56.289747   38063 cri.go:89] found id: ""
	I1003 18:17:56.289764   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.289772   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:56.289779   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:56.289821   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:56.314478   38063 cri.go:89] found id: ""
	I1003 18:17:56.314493   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.314501   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:56.314507   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:56.314557   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:56.338961   38063 cri.go:89] found id: ""
	I1003 18:17:56.338989   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.338998   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:56.339004   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:56.339046   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:56.364770   38063 cri.go:89] found id: ""
	I1003 18:17:56.364784   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.364789   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:56.364793   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:56.364832   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:56.391018   38063 cri.go:89] found id: ""
	I1003 18:17:56.391031   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.391037   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:56.391041   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:56.391081   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:56.415373   38063 cri.go:89] found id: ""
	I1003 18:17:56.415389   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.415398   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:56.415405   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:56.415447   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:56.439537   38063 cri.go:89] found id: ""
	I1003 18:17:56.439554   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.439564   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:56.439572   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:56.439584   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:56.506236   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:56.506256   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:56.517260   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:56.517274   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:56.570626   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:56.564107   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:56.564604   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:56.566115   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:56.566514   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:56.568021   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:56.570639   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:56.570658   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:56.633346   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:56.633369   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:59.161404   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:59.171988   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:59.172046   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:59.196437   38063 cri.go:89] found id: ""
	I1003 18:17:59.196449   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.196455   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:59.196459   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:59.196498   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:59.220855   38063 cri.go:89] found id: ""
	I1003 18:17:59.220868   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.220874   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:59.220878   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:59.220926   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:59.246564   38063 cri.go:89] found id: ""
	I1003 18:17:59.246579   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.246587   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:59.246595   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:59.246655   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:59.271407   38063 cri.go:89] found id: ""
	I1003 18:17:59.271422   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.271428   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:59.271433   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:59.271474   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:59.295265   38063 cri.go:89] found id: ""
	I1003 18:17:59.295281   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.295290   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:59.295297   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:59.295344   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:59.319819   38063 cri.go:89] found id: ""
	I1003 18:17:59.319835   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.319849   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:59.319853   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:59.319893   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:59.344045   38063 cri.go:89] found id: ""
	I1003 18:17:59.344058   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.344064   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:59.344071   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:59.344080   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:59.411448   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:59.411465   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:59.422319   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:59.422332   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:59.475228   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:59.468454   12932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:59.468914   12932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:59.470455   12932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:59.470862   12932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:59.472347   12932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:59.475255   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:59.475270   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:59.536088   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:59.536106   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:02.065737   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:02.076173   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:02.076214   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:02.101478   38063 cri.go:89] found id: ""
	I1003 18:18:02.101495   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.101505   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:02.101513   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:02.101556   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:02.126528   38063 cri.go:89] found id: ""
	I1003 18:18:02.126541   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.126547   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:02.126551   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:02.126591   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:02.150958   38063 cri.go:89] found id: ""
	I1003 18:18:02.150971   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.150997   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:02.151003   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:02.151051   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:02.176464   38063 cri.go:89] found id: ""
	I1003 18:18:02.176478   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.176485   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:02.176497   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:02.176539   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:02.201345   38063 cri.go:89] found id: ""
	I1003 18:18:02.201361   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.201368   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:02.201373   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:02.201415   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:02.227338   38063 cri.go:89] found id: ""
	I1003 18:18:02.227352   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.227359   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:02.227363   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:02.227407   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:02.253859   38063 cri.go:89] found id: ""
	I1003 18:18:02.253875   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.253882   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:02.253890   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:02.253902   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:02.314960   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:02.314986   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:02.343587   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:02.343605   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:02.412159   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:02.412178   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:02.423525   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:02.423542   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:02.480478   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:02.473940   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:02.474565   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:02.476146   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:02.476539   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:02.477814   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:04.981110   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:04.992430   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:04.992470   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:05.019218   38063 cri.go:89] found id: ""
	I1003 18:18:05.019232   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.019238   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:05.019243   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:05.019282   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:05.042823   38063 cri.go:89] found id: ""
	I1003 18:18:05.042836   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.042841   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:05.042845   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:05.042902   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:05.069124   38063 cri.go:89] found id: ""
	I1003 18:18:05.069141   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.069148   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:05.069152   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:05.069196   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:05.093833   38063 cri.go:89] found id: ""
	I1003 18:18:05.093848   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.093856   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:05.093862   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:05.093932   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:05.119454   38063 cri.go:89] found id: ""
	I1003 18:18:05.119468   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.119475   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:05.119479   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:05.119523   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:05.143897   38063 cri.go:89] found id: ""
	I1003 18:18:05.143914   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.143920   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:05.143925   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:05.143966   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:05.167637   38063 cri.go:89] found id: ""
	I1003 18:18:05.167650   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.167656   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:05.167663   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:05.167674   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:05.195697   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:05.195715   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:05.260408   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:05.260428   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:05.271292   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:05.271309   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:05.324867   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:05.318440   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:05.318912   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:05.320332   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:05.320733   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:05.322261   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:05.324886   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:05.324898   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
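Every describe-nodes attempt fails the same way: kubectl inside the node cannot reach the apiserver on localhost:8441, which is consistent with the empty crictl listings above (no kube-apiserver container ever started). A quick check that nothing is serving that port, as a sketch (port 8441 comes from the errors in the log; the ss utility being present in the node image is an assumption):

	# list listeners on the apiserver port; no output means nothing is bound
	minikube ssh -- sudo ss -ltnp | grep 8441
	# the unauthenticated livez endpoint would respond if the apiserver were up
	minikube ssh -- curl -sk https://localhost:8441/livez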
	I1003 18:18:07.885833   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:07.895849   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:07.895957   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:07.921467   38063 cri.go:89] found id: ""
	I1003 18:18:07.921479   38063 logs.go:282] 0 containers: []
	W1003 18:18:07.921485   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:07.921490   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:07.921545   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:07.945467   38063 cri.go:89] found id: ""
	I1003 18:18:07.945480   38063 logs.go:282] 0 containers: []
	W1003 18:18:07.945487   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:07.945492   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:07.945539   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:07.970084   38063 cri.go:89] found id: ""
	I1003 18:18:07.970098   38063 logs.go:282] 0 containers: []
	W1003 18:18:07.970105   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:07.970110   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:07.970148   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:07.994263   38063 cri.go:89] found id: ""
	I1003 18:18:07.994278   38063 logs.go:282] 0 containers: []
	W1003 18:18:07.994287   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:07.994293   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:07.994334   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:08.018778   38063 cri.go:89] found id: ""
	I1003 18:18:08.018793   38063 logs.go:282] 0 containers: []
	W1003 18:18:08.018800   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:08.018805   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:08.018844   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:08.043138   38063 cri.go:89] found id: ""
	I1003 18:18:08.043153   38063 logs.go:282] 0 containers: []
	W1003 18:18:08.043159   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:08.043164   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:08.043203   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:08.067785   38063 cri.go:89] found id: ""
	I1003 18:18:08.067799   38063 logs.go:282] 0 containers: []
	W1003 18:18:08.067805   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:08.067811   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:08.067820   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:08.136408   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:08.136429   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:08.147427   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:08.147445   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:08.201110   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:08.194693   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:08.195161   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:08.196715   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:08.197135   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:08.198610   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:08.201124   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:08.201135   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:08.261991   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:08.262010   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:10.791196   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:10.801467   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:10.801525   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:10.827655   38063 cri.go:89] found id: ""
	I1003 18:18:10.827672   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.827683   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:10.827688   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:10.827735   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:10.852558   38063 cri.go:89] found id: ""
	I1003 18:18:10.852574   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.852582   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:10.852588   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:10.852638   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:10.876842   38063 cri.go:89] found id: ""
	I1003 18:18:10.876858   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.876870   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:10.876874   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:10.876918   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:10.902827   38063 cri.go:89] found id: ""
	I1003 18:18:10.902840   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.902846   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:10.902851   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:10.902890   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:10.927840   38063 cri.go:89] found id: ""
	I1003 18:18:10.927855   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.927861   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:10.927865   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:10.927909   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:10.952535   38063 cri.go:89] found id: ""
	I1003 18:18:10.952550   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.952556   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:10.952561   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:10.952602   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:10.976585   38063 cri.go:89] found id: ""
	I1003 18:18:10.976601   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.976610   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:10.976620   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:10.976631   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:10.987359   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:10.987373   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:11.041048   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:11.034604   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:11.035105   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:11.036603   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:11.036989   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:11.038508   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:11.041058   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:11.041068   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:11.101637   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:11.101658   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:11.128867   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:11.128885   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:13.697689   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:13.708864   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:13.708949   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:13.733837   38063 cri.go:89] found id: ""
	I1003 18:18:13.733851   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.733857   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:13.733864   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:13.733915   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:13.757681   38063 cri.go:89] found id: ""
	I1003 18:18:13.757698   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.757707   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:13.757713   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:13.757778   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:13.782545   38063 cri.go:89] found id: ""
	I1003 18:18:13.782560   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.782572   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:13.782576   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:13.782624   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:13.806939   38063 cri.go:89] found id: ""
	I1003 18:18:13.806955   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.806964   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:13.806970   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:13.807041   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:13.831768   38063 cri.go:89] found id: ""
	I1003 18:18:13.831783   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.831790   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:13.831795   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:13.831837   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:13.856076   38063 cri.go:89] found id: ""
	I1003 18:18:13.856093   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.856101   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:13.856107   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:13.856163   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:13.879410   38063 cri.go:89] found id: ""
	I1003 18:18:13.879423   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.879430   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:13.879438   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:13.879450   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:13.944708   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:13.944727   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:13.956175   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:13.956194   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:14.010487   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:14.003834   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:14.004418   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:14.005911   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:14.006368   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:14.007894   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:14.010499   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:14.010514   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:14.071892   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:14.071911   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
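	The "listing CRI containers" lines issue one crictl probe per expected control-plane component; an empty ID list is what produces each No-container-was-found warning. A compact equivalent of that probe loop, as a sketch (assumes crictl can reach the default CRI-O socket):

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -n "$ids" ] || echo "No container was found matching \"$name\""
	    done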
	I1003 18:18:16.601878   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:16.612139   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:16.612183   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:16.635115   38063 cri.go:89] found id: ""
	I1003 18:18:16.635128   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.635134   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:16.635139   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:16.635180   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:16.660332   38063 cri.go:89] found id: ""
	I1003 18:18:16.660347   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.660354   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:16.660361   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:16.660416   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:16.683528   38063 cri.go:89] found id: ""
	I1003 18:18:16.683551   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.683560   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:16.683566   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:16.683619   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:16.708287   38063 cri.go:89] found id: ""
	I1003 18:18:16.708304   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.708313   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:16.708319   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:16.708368   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:16.732627   38063 cri.go:89] found id: ""
	I1003 18:18:16.732642   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.732651   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:16.732670   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:16.732712   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:16.757768   38063 cri.go:89] found id: ""
	I1003 18:18:16.757782   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.757788   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:16.757793   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:16.757836   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:16.781970   38063 cri.go:89] found id: ""
	I1003 18:18:16.781997   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.782011   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:16.782020   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:16.782036   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:16.850796   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:16.850813   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:16.862129   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:16.862143   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:16.915039   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:16.908470   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:16.908860   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:16.910345   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:16.910711   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:16.912263   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:16.915050   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:16.915063   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:16.972388   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:16.972405   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:19.502094   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:19.512481   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:19.512541   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:19.537212   38063 cri.go:89] found id: ""
	I1003 18:18:19.537228   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.537236   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:19.537242   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:19.537305   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:19.561717   38063 cri.go:89] found id: ""
	I1003 18:18:19.561734   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.561741   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:19.561746   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:19.561793   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:19.585423   38063 cri.go:89] found id: ""
	I1003 18:18:19.585436   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.585443   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:19.585447   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:19.585490   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:19.609708   38063 cri.go:89] found id: ""
	I1003 18:18:19.609722   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.609728   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:19.609733   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:19.609772   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:19.632853   38063 cri.go:89] found id: ""
	I1003 18:18:19.632869   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.632878   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:19.632884   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:19.632933   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:19.656204   38063 cri.go:89] found id: ""
	I1003 18:18:19.656220   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.656228   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:19.656235   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:19.656287   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:19.680640   38063 cri.go:89] found id: ""
	I1003 18:18:19.680663   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.680669   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:19.680677   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:19.680689   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:19.707259   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:19.707275   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:19.774362   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:19.774380   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:19.785563   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:19.785577   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:19.839901   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:19.833112   13812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:19.833732   13812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:19.835306   13812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:19.835682   13812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:19.837164   13812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:19.839911   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:19.839921   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:22.400537   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:22.410712   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:22.410758   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:22.434956   38063 cri.go:89] found id: ""
	I1003 18:18:22.434970   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.434988   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:22.434995   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:22.435050   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:22.459920   38063 cri.go:89] found id: ""
	I1003 18:18:22.459936   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.459945   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:22.459950   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:22.460011   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:22.484807   38063 cri.go:89] found id: ""
	I1003 18:18:22.484821   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.484827   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:22.484832   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:22.484876   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:22.510038   38063 cri.go:89] found id: ""
	I1003 18:18:22.510055   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.510063   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:22.510069   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:22.510127   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:22.534586   38063 cri.go:89] found id: ""
	I1003 18:18:22.534606   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.534616   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:22.534622   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:22.534684   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:22.559759   38063 cri.go:89] found id: ""
	I1003 18:18:22.559776   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.559785   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:22.559791   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:22.559847   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:22.584554   38063 cri.go:89] found id: ""
	I1003 18:18:22.584569   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.584579   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:22.584588   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:22.584602   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:22.653550   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:22.653568   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:22.664744   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:22.664760   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:22.718670   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:22.712190   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:22.712660   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:22.714209   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:22.714609   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:22.716119   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:22.718679   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:22.718689   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:22.781634   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:22.781662   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:25.311342   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:25.321538   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:25.321589   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:25.347212   38063 cri.go:89] found id: ""
	I1003 18:18:25.347228   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.347237   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:25.347244   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:25.347288   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:25.373240   38063 cri.go:89] found id: ""
	I1003 18:18:25.373255   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.373261   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:25.373265   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:25.373316   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:25.398262   38063 cri.go:89] found id: ""
	I1003 18:18:25.398280   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.398287   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:25.398293   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:25.398340   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:25.423522   38063 cri.go:89] found id: ""
	I1003 18:18:25.423536   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.423544   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:25.423550   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:25.423609   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:25.448232   38063 cri.go:89] found id: ""
	I1003 18:18:25.448249   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.448258   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:25.448264   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:25.448311   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:25.474690   38063 cri.go:89] found id: ""
	I1003 18:18:25.474704   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.474710   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:25.474716   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:25.474766   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:25.499693   38063 cri.go:89] found id: ""
	I1003 18:18:25.499707   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.499715   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:25.499723   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:25.499733   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:25.526210   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:25.526225   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:25.595354   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:25.595373   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:25.606969   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:25.606998   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:25.662186   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:25.655368   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:25.655970   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:25.657492   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:25.657931   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:25.659386   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:25.662197   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:25.662206   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:28.226017   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:28.237132   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:28.237175   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:28.262449   38063 cri.go:89] found id: ""
	I1003 18:18:28.262466   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.262474   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:28.262479   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:28.262524   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:28.287653   38063 cri.go:89] found id: ""
	I1003 18:18:28.287669   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.287679   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:28.287685   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:28.287730   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:28.313255   38063 cri.go:89] found id: ""
	I1003 18:18:28.313269   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.313276   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:28.313280   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:28.313321   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:28.338727   38063 cri.go:89] found id: ""
	I1003 18:18:28.338742   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.338748   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:28.338752   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:28.338793   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:28.363285   38063 cri.go:89] found id: ""
	I1003 18:18:28.363303   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.363312   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:28.363317   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:28.363359   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:28.388945   38063 cri.go:89] found id: ""
	I1003 18:18:28.388958   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.388964   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:28.388969   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:28.389039   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:28.414591   38063 cri.go:89] found id: ""
	I1003 18:18:28.414607   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.414614   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:28.414621   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:28.414630   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:28.425367   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:28.425382   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:28.479472   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:28.472065   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:28.472604   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:28.474900   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:28.475366   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:28.476874   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:28.479481   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:28.479491   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:28.538844   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:28.538865   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:28.567294   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:28.567309   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:31.138009   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:31.148430   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:31.148480   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:31.173355   38063 cri.go:89] found id: ""
	I1003 18:18:31.173368   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.173375   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:31.173380   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:31.173418   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:31.198151   38063 cri.go:89] found id: ""
	I1003 18:18:31.198166   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.198181   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:31.198187   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:31.198231   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:31.223275   38063 cri.go:89] found id: ""
	I1003 18:18:31.223290   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.223296   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:31.223300   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:31.223343   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:31.247221   38063 cri.go:89] found id: ""
	I1003 18:18:31.247237   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.247248   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:31.247253   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:31.247310   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:31.270563   38063 cri.go:89] found id: ""
	I1003 18:18:31.270576   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.270582   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:31.270586   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:31.270636   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:31.295134   38063 cri.go:89] found id: ""
	I1003 18:18:31.295150   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.295159   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:31.295165   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:31.295204   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:31.319654   38063 cri.go:89] found id: ""
	I1003 18:18:31.319668   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.319675   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:31.319683   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:31.319698   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:31.386428   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:31.386448   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:31.397662   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:31.397677   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:31.451288   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:31.444650   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.445190   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.446750   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.447199   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.448658   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:31.451299   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:31.451309   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:31.510468   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:31.510487   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:34.039627   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:34.050185   38063 kubeadm.go:601] duration metric: took 4m1.950557888s to restartPrimaryControlPlane
	W1003 18:18:34.050251   38063 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1003 18:18:34.050324   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 18:18:34.501082   38063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:18:34.513430   38063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:18:34.521102   38063 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:18:34.521139   38063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:18:34.528531   38063 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:18:34.528540   38063 kubeadm.go:157] found existing configuration files:
	
	I1003 18:18:34.528574   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1003 18:18:34.535908   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:18:34.535967   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:18:34.543072   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1003 18:18:34.550220   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:18:34.550263   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:18:34.557251   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1003 18:18:34.565090   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:18:34.565130   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:18:34.571882   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1003 18:18:34.579174   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:18:34.579210   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
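	The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and here every grep exits with status 2 because the earlier kubeadm reset removed the files. The same logic as a loop, for reference (illustrative sketch only):

	    endpoint="https://control-plane.minikube.internal:8441"
	    for name in admin kubelet controller-manager scheduler; do
	      conf="/etc/kubernetes/${name}.conf"
	      sudo grep -q "$endpoint" "$conf" 2>/dev/null || sudo rm -f "$conf"
	    done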
	I1003 18:18:34.585996   38063 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:18:34.620715   38063 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:18:34.620773   38063 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:18:34.639243   38063 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:18:34.639317   38063 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:18:34.639360   38063 kubeadm.go:318] OS: Linux
	I1003 18:18:34.639397   38063 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:18:34.639466   38063 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:18:34.639529   38063 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:18:34.639587   38063 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:18:34.639687   38063 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:18:34.639749   38063 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:18:34.639803   38063 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:18:34.639863   38063 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:18:34.692781   38063 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:18:34.692898   38063 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:18:34.693025   38063 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:18:34.699300   38063 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:18:34.703358   38063 out.go:252]   - Generating certificates and keys ...
	I1003 18:18:34.703438   38063 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:18:34.703491   38063 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:18:34.703553   38063 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 18:18:34.703602   38063 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 18:18:34.703664   38063 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 18:18:34.703733   38063 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 18:18:34.703790   38063 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 18:18:34.703840   38063 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 18:18:34.703900   38063 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 18:18:34.703962   38063 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 18:18:34.704000   38063 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 18:18:34.704043   38063 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:18:34.953422   38063 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:18:35.214353   38063 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:18:35.447415   38063 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:18:35.645347   38063 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:18:36.220332   38063 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:18:36.220714   38063 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:18:36.222788   38063 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
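	kubeadm has now rendered static Pod manifests for etcd and the three control-plane components into /etc/kubernetes/manifests; the kubelet watches that directory and is expected to start them. To inspect what was generated (a sketch; assumes the manifest folder kubeadm reports above, and that the apiserver manifest carries this test's non-default port 8441):

	    sudo ls -la /etc/kubernetes/manifests
	    sudo grep -n 8441 /etc/kubernetes/manifests/kube-apiserver.yaml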
	I1003 18:18:36.225372   38063 out.go:252]   - Booting up control plane ...
	I1003 18:18:36.225492   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:18:36.225605   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:18:36.225672   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:18:36.237955   38063 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:18:36.238117   38063 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:18:36.244390   38063 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:18:36.244573   38063 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:18:36.244608   38063 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:18:36.339701   38063 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:18:36.339860   38063 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:18:36.841336   38063 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.785786ms
	I1003 18:18:36.845100   38063 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:18:36.845207   38063 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1003 18:18:36.845308   38063 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:18:36.845418   38063 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:22:36.846410   38063 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001254073s
	I1003 18:22:36.846572   38063 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001316832s
	I1003 18:22:36.846680   38063 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00135784s
	I1003 18:22:36.846684   38063 kubeadm.go:318] 
	I1003 18:22:36.846803   38063 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:22:36.846887   38063 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:22:36.847019   38063 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:22:36.847152   38063 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:22:36.847221   38063 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:22:36.847290   38063 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:22:36.847293   38063 kubeadm.go:318] 
	I1003 18:22:36.850267   38063 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:22:36.850420   38063 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:22:36.851109   38063 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1003 18:22:36.851222   38063 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1003 18:22:36.851310   38063 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.785786ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001254073s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001316832s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00135784s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
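The triage advice printed above can be followed directly on the minikube node. A minimal sketch, assuming shell access to the node (the hostname in the later log sections is functional-889240, so the profile name is assumed to match):

    # Open a shell on the node (profile name assumed from the logs)
    minikube ssh -p functional-889240

    # List all Kubernetes containers, including exited ones, via the CRI-O socket
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause

    # Inspect a failing container's logs using an ID from the listing
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

In this run the listing comes back empty (see the "container status" section below): the control-plane containers were never successfully created, rather than crashing after start.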
	
	I1003 18:22:36.851378   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 18:22:37.292774   38063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:22:37.305190   38063 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:22:37.305239   38063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:22:37.312706   38063 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:22:37.312714   38063 kubeadm.go:157] found existing configuration files:
	
	I1003 18:22:37.312747   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1003 18:22:37.319873   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:22:37.319914   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:22:37.326628   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1003 18:22:37.333616   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:22:37.333654   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:22:37.340503   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1003 18:22:37.347489   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:22:37.347533   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:22:37.354448   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1003 18:22:37.361615   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:22:37.361649   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
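The grep-and-remove sequence above is minikube's stale-kubeconfig cleanup (kubeadm.go:163): any kubeconfig that does not reference the expected control-plane endpoint is deleted before retrying init. A rough shell equivalent, as a sketch only (the real logic lives in minikube's Go code):

    # Drop each kubeconfig that does not point at the expected endpoint
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done

Here each grep exits with status 2 because the preceding kubeadm reset already removed the files, so the rm calls are no-ops.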
	I1003 18:22:37.368313   38063 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:22:37.421185   38063 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:22:37.475455   38063 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:26:40.291288   38063 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1003 18:26:40.291385   38063 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 18:26:40.294089   38063 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:26:40.294149   38063 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:26:40.294247   38063 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:26:40.294331   38063 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:26:40.294363   38063 kubeadm.go:318] OS: Linux
	I1003 18:26:40.294399   38063 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:26:40.294467   38063 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:26:40.294515   38063 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:26:40.294554   38063 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:26:40.294601   38063 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:26:40.294658   38063 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:26:40.294706   38063 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:26:40.294741   38063 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:26:40.294849   38063 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:26:40.294960   38063 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:26:40.295057   38063 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:26:40.295109   38063 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:26:40.297835   38063 out.go:252]   - Generating certificates and keys ...
	I1003 18:26:40.297914   38063 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:26:40.297990   38063 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:26:40.298082   38063 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 18:26:40.298152   38063 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 18:26:40.298217   38063 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 18:26:40.298275   38063 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 18:26:40.298326   38063 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 18:26:40.298376   38063 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 18:26:40.298444   38063 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 18:26:40.298519   38063 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 18:26:40.298554   38063 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 18:26:40.298605   38063 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:26:40.298646   38063 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:26:40.298698   38063 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:26:40.298740   38063 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:26:40.298791   38063 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:26:40.298839   38063 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:26:40.298907   38063 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:26:40.298998   38063 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:26:40.300468   38063 out.go:252]   - Booting up control plane ...
	I1003 18:26:40.300542   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:26:40.300632   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:26:40.300695   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:26:40.300779   38063 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:26:40.300871   38063 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:26:40.300963   38063 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:26:40.301061   38063 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:26:40.301100   38063 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:26:40.301207   38063 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:26:40.301294   38063 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:26:40.301341   38063 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500810972s
	I1003 18:26:40.301415   38063 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:26:40.301479   38063 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1003 18:26:40.301550   38063 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:26:40.301629   38063 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:26:40.301688   38063 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001083242s
	I1003 18:26:40.301753   38063 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001112366s
	I1003 18:26:40.301845   38063 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001257154s
	I1003 18:26:40.301849   38063 kubeadm.go:318] 
	I1003 18:26:40.301925   38063 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:26:40.302009   38063 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:26:40.302080   38063 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:26:40.302157   38063 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:26:40.302217   38063 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:26:40.302288   38063 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:26:40.302308   38063 kubeadm.go:318] 
	I1003 18:26:40.302352   38063 kubeadm.go:402] duration metric: took 12m8.237590419s to StartCluster
	I1003 18:26:40.302401   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:26:40.302450   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:26:40.329135   38063 cri.go:89] found id: ""
	I1003 18:26:40.329148   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.329154   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:26:40.329160   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:26:40.329203   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:26:40.354340   38063 cri.go:89] found id: ""
	I1003 18:26:40.354354   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.354361   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:26:40.354366   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:26:40.354419   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:26:40.380556   38063 cri.go:89] found id: ""
	I1003 18:26:40.380570   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.380576   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:26:40.380581   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:26:40.380640   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:26:40.406655   38063 cri.go:89] found id: ""
	I1003 18:26:40.406670   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.406677   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:26:40.406683   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:26:40.406728   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:26:40.432698   38063 cri.go:89] found id: ""
	I1003 18:26:40.432713   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.432720   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:26:40.432725   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:26:40.432769   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:26:40.459363   38063 cri.go:89] found id: ""
	I1003 18:26:40.459378   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.459384   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:26:40.459390   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:26:40.459437   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:26:40.484951   38063 cri.go:89] found id: ""
	I1003 18:26:40.484964   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.484971   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:26:40.484997   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:26:40.485019   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:26:40.549245   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:26:40.549263   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:26:40.560727   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:26:40.560741   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:26:40.616474   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:26:40.609386   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.610009   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.611564   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.611939   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.613451   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: [stderr identical to the lines immediately above]
	I1003 18:26:40.616500   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:26:40.616509   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:26:40.676470   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:26:40.676488   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1003 18:26:40.704576   38063 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500810972s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001083242s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112366s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001257154s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1003 18:26:40.704638   38063 out.go:285] * 
	W1003 18:26:40.704701   38063 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output in the "Error starting cluster" block above]
	
	W1003 18:26:40.704715   38063 out.go:285] * 
	W1003 18:26:40.706538   38063 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:26:40.710390   38063 out.go:203] 
	W1003 18:26:40.711880   38063 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output in the "Error starting cluster" block above]
	
	W1003 18:26:40.711903   38063 out.go:285] * 
	I1003 18:26:40.714182   38063 out.go:203] 
	
	
	==> CRI-O <==
	Oct 03 18:26:34 functional-889240 crio[5881]: time="2025-10-03T18:26:34.948118628Z" level=info msg="createCtr: removing container 4a0da56a80b0bf9cf042a1ed29d0e9a46f1bcc83feb34f5c75fb117227f399ca" id=b2becadb-533e-4a54-9579-2ace7aeb4dbb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:34 functional-889240 crio[5881]: time="2025-10-03T18:26:34.948150012Z" level=info msg="createCtr: deleting container 4a0da56a80b0bf9cf042a1ed29d0e9a46f1bcc83feb34f5c75fb117227f399ca from storage" id=b2becadb-533e-4a54-9579-2ace7aeb4dbb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:34 functional-889240 crio[5881]: time="2025-10-03T18:26:34.950407562Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-889240_kube-system_7e715cb6024854d45a9fa99576167e43_0" id=b2becadb-533e-4a54-9579-2ace7aeb4dbb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:36 functional-889240 crio[5881]: time="2025-10-03T18:26:36.924698487Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=99144eec-10ba-48ad-9ef7-71167b1dc31a name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:36 functional-889240 crio[5881]: time="2025-10-03T18:26:36.925531495Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=d51ec181-c49a-48b5-b411-5c6c9b8cf406 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:36 functional-889240 crio[5881]: time="2025-10-03T18:26:36.926349562Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-889240/kube-apiserver" id=00fa20ab-4f0b-4c2e-8277-c8e0a21c8a69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:36 functional-889240 crio[5881]: time="2025-10-03T18:26:36.926567549Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:36 functional-889240 crio[5881]: time="2025-10-03T18:26:36.929801236Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:36 functional-889240 crio[5881]: time="2025-10-03T18:26:36.930188171Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:36 functional-889240 crio[5881]: time="2025-10-03T18:26:36.944674069Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=00fa20ab-4f0b-4c2e-8277-c8e0a21c8a69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:36 functional-889240 crio[5881]: time="2025-10-03T18:26:36.946023106Z" level=info msg="createCtr: deleting container ID 048a600cce13059b112019687fce28edbb01a74d78512f8f553ecbd9dafecbc2 from idIndex" id=00fa20ab-4f0b-4c2e-8277-c8e0a21c8a69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:36 functional-889240 crio[5881]: time="2025-10-03T18:26:36.946054105Z" level=info msg="createCtr: removing container 048a600cce13059b112019687fce28edbb01a74d78512f8f553ecbd9dafecbc2" id=00fa20ab-4f0b-4c2e-8277-c8e0a21c8a69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:36 functional-889240 crio[5881]: time="2025-10-03T18:26:36.946089326Z" level=info msg="createCtr: deleting container 048a600cce13059b112019687fce28edbb01a74d78512f8f553ecbd9dafecbc2 from storage" id=00fa20ab-4f0b-4c2e-8277-c8e0a21c8a69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:36 functional-889240 crio[5881]: time="2025-10-03T18:26:36.948138665Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-889240_kube-system_9d9b7aefd7427246dd018814b6979298_0" id=00fa20ab-4f0b-4c2e-8277-c8e0a21c8a69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:40 functional-889240 crio[5881]: time="2025-10-03T18:26:40.925002598Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=8286f6a4-e4d1-4e99-99c6-d455c86c17e2 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:40 functional-889240 crio[5881]: time="2025-10-03T18:26:40.925769076Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=a468d190-7222-4bd4-b6f0-08b15003496b name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:40 functional-889240 crio[5881]: time="2025-10-03T18:26:40.926574524Z" level=info msg="Creating container: kube-system/etcd-functional-889240/etcd" id=9a93a6e4-ad7e-48e5-a86a-fc4ec0b7612b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:40 functional-889240 crio[5881]: time="2025-10-03T18:26:40.926842103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:40 functional-889240 crio[5881]: time="2025-10-03T18:26:40.931090446Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:40 functional-889240 crio[5881]: time="2025-10-03T18:26:40.931488821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:40 functional-889240 crio[5881]: time="2025-10-03T18:26:40.947789662Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=9a93a6e4-ad7e-48e5-a86a-fc4ec0b7612b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:40 functional-889240 crio[5881]: time="2025-10-03T18:26:40.94915377Z" level=info msg="createCtr: deleting container ID fc25ddc2cae4957d13811e0d2971b92e2b7bac4ed5db09337f90301d1b9c8720 from idIndex" id=9a93a6e4-ad7e-48e5-a86a-fc4ec0b7612b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:40 functional-889240 crio[5881]: time="2025-10-03T18:26:40.949189492Z" level=info msg="createCtr: removing container fc25ddc2cae4957d13811e0d2971b92e2b7bac4ed5db09337f90301d1b9c8720" id=9a93a6e4-ad7e-48e5-a86a-fc4ec0b7612b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:40 functional-889240 crio[5881]: time="2025-10-03T18:26:40.949219845Z" level=info msg="createCtr: deleting container fc25ddc2cae4957d13811e0d2971b92e2b7bac4ed5db09337f90301d1b9c8720 from storage" id=9a93a6e4-ad7e-48e5-a86a-fc4ec0b7612b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:40 functional-889240 crio[5881]: time="2025-10-03T18:26:40.951628509Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-889240_kube-system_a73daf0147d5280c6db538ca59db9fe0_0" id=9a93a6e4-ad7e-48e5-a86a-fc4ec0b7612b name=/runtime.v1.RuntimeService/CreateContainer
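Every createCtr call in this excerpt fails with "cannot open sd-bus: No such file or directory" before kubeadm's health checks ever time out, which is consistent with the runtime trying to use the systemd cgroup manager on a node where systemd's bus socket is not reachable. Two quick checks, assuming shell access to the node (the config path and socket locations below are common defaults, not taken from this report):

    # Which cgroup manager is CRI-O configured with?
    sudo grep -R cgroup_manager /etc/crio/

    # Is systemd's bus socket present inside the node?
    ls -l /run/systemd/private /run/dbus/system_bus_socket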
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:26:43.719802   15912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:43.720332   15912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:43.721948   15912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:43.722396   15912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:43.723880   15912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
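The connection-refused errors above match the control-plane-check failures earlier in the log: nothing is listening on port 8441. The endpoints kubeadm was polling can be probed directly to confirm, for example:

    # apiserver livez on the node address kubeadm was checking
    curl -k --max-time 5 https://192.168.49.2:8441/livez

    # scheduler and controller-manager health endpoints on loopback
    curl -k --max-time 5 https://127.0.0.1:10259/livez
    curl -k --max-time 5 https://127.0.0.1:10257/healthz

All three are expected to fail with "connection refused" here, consistent with the components never having started.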
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:26:43 up  1:09,  0 user,  load average: 0.08, 0.06, 0.04
	Linux functional-889240 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:26:36 functional-889240 kubelet[15004]: I1003 18:26:36.696467   15004 kubelet_node_status.go:75] "Attempting to register node" node="functional-889240"
	Oct 03 18:26:36 functional-889240 kubelet[15004]: E1003 18:26:36.696823   15004 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-889240"
	Oct 03 18:26:36 functional-889240 kubelet[15004]: E1003 18:26:36.924300   15004 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:26:36 functional-889240 kubelet[15004]: E1003 18:26:36.948400   15004 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:26:36 functional-889240 kubelet[15004]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:36 functional-889240 kubelet[15004]:  > podSandboxID="d2a1f7a262459adddcbc8998558ca80ae50f332cedd95d5813e79fa17642c365"
	Oct 03 18:26:36 functional-889240 kubelet[15004]: E1003 18:26:36.948480   15004 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:26:36 functional-889240 kubelet[15004]:         container kube-apiserver start failed in pod kube-apiserver-functional-889240_kube-system(9d9b7aefd7427246dd018814b6979298): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:36 functional-889240 kubelet[15004]:  > logger="UnhandledError"
	Oct 03 18:26:36 functional-889240 kubelet[15004]: E1003 18:26:36.948509   15004 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-889240" podUID="9d9b7aefd7427246dd018814b6979298"
	Oct 03 18:26:37 functional-889240 kubelet[15004]: E1003 18:26:37.310073   15004 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-889240.186b0e42e698a181  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-889240,UID:functional-889240,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-889240 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-889240,},FirstTimestamp:2025-10-03 18:22:39.917703553 +0000 UTC m=+1.131431312,LastTimestamp:2025-10-03 18:22:39.917703553 +0000 UTC m=+1.131431312,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-889240,}"
	Oct 03 18:26:39 functional-889240 kubelet[15004]: E1003 18:26:39.261580   15004 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 03 18:26:39 functional-889240 kubelet[15004]: E1003 18:26:39.696867   15004 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 03 18:26:39 functional-889240 kubelet[15004]: E1003 18:26:39.939428   15004 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-889240\" not found"
	Oct 03 18:26:40 functional-889240 kubelet[15004]: E1003 18:26:40.924607   15004 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:26:40 functional-889240 kubelet[15004]: E1003 18:26:40.951926   15004 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:26:40 functional-889240 kubelet[15004]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:40 functional-889240 kubelet[15004]:  > podSandboxID="816bf4aaa4990184bdc95c0d86d21e6c5c4acf1f357b2bf3229d2f1f717980c8"
	Oct 03 18:26:40 functional-889240 kubelet[15004]: E1003 18:26:40.952038   15004 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:26:40 functional-889240 kubelet[15004]:         container etcd start failed in pod etcd-functional-889240_kube-system(a73daf0147d5280c6db538ca59db9fe0): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:40 functional-889240 kubelet[15004]:  > logger="UnhandledError"
	Oct 03 18:26:40 functional-889240 kubelet[15004]: E1003 18:26:40.952069   15004 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-889240" podUID="a73daf0147d5280c6db538ca59db9fe0"
	Oct 03 18:26:43 functional-889240 kubelet[15004]: E1003 18:26:43.547345   15004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-889240?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 03 18:26:43 functional-889240 kubelet[15004]: I1003 18:26:43.698772   15004 kubelet_node_status.go:75] "Attempting to register node" node="functional-889240"
	Oct 03 18:26:43 functional-889240 kubelet[15004]: E1003 18:26:43.699160   15004 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-889240"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240: exit status 2 (302.420561ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-889240" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (1.85s)
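All of the serial failures above share the same signature: every request to the apiserver on 192.168.49.2:8441 is refused, and the kubelet log shows kube-apiserver (and etcd) stuck in CreateContainerError. A minimal sketch for confirming that state from the CI host, assuming the endpoint, profile name, and crictl availability shown in the logs above:

    # Probe the apiserver endpoint from the logs; expect "connection refused"
    curl -k --max-time 5 https://192.168.49.2:8441/healthz

    # List control-plane containers inside the node; expect no running kube-apiserver
    out/minikube-linux-amd64 -p functional-889240 ssh -- sudo crictl ps -a | grep -E 'kube-apiserver|etcd'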

TestFunctional/serial/InvalidService (0.05s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-889240 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-889240 apply -f testdata/invalidsvc.yaml: exit status 1 (47.021563ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-889240 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.05s)
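Note the failure mode here: kubectl never evaluated testdata/invalidsvc.yaml, because downloading the OpenAPI schema for validation requires the (down) apiserver. The --validate=false flag suggested in the stderr would only skip schema validation; the apply would then fail on the same refused connection. A hedged illustration against the same context:

    # Skipping validation just moves the failure to the connection itself
    kubectl --context functional-889240 apply -f testdata/invalidsvc.yaml --validate=false

    # Direct readiness probe of the apiserver this test depends on
    kubectl --context functional-889240 get --raw /readyz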

TestFunctional/parallel/DashboardCmd (1.7s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-889240 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-889240 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-889240 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-889240 --alsologtostderr -v=1] stderr:
I1003 18:26:53.524551   59028 out.go:360] Setting OutFile to fd 1 ...
I1003 18:26:53.524913   59028 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:26:53.524931   59028 out.go:374] Setting ErrFile to fd 2...
I1003 18:26:53.524935   59028 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:26:53.525209   59028 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
I1003 18:26:53.525577   59028 mustload.go:65] Loading cluster: functional-889240
I1003 18:26:53.525988   59028 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:26:53.526414   59028 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
I1003 18:26:53.544355   59028 host.go:66] Checking if "functional-889240" exists ...
I1003 18:26:53.544732   59028 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1003 18:26:53.604319   59028 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:26:53.592663679 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1003 18:26:53.604469   59028 api_server.go:166] Checking apiserver status ...
I1003 18:26:53.604531   59028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1003 18:26:53.604590   59028 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
I1003 18:26:53.622039   59028 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
W1003 18:26:53.725664   59028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1003 18:26:53.727557   59028 out.go:179] * The control-plane node functional-889240 apiserver is not running: (state=Stopped)
I1003 18:26:53.728820   59028 out.go:179]   To start a cluster, run: "minikube start -p functional-889240"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-889240
helpers_test.go:243: (dbg) docker inspect functional-889240:

-- stdout --
	[
	    {
	        "Id": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	        "Created": "2025-10-03T17:59:56.619817507Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 26766,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T17:59:56.652603806Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hosts",
	        "LogPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a-json.log",
	        "Name": "/functional-889240",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-889240:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-889240",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	                "LowerDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-889240",
	                "Source": "/var/lib/docker/volumes/functional-889240/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-889240",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-889240",
	                "name.minikube.sigs.k8s.io": "functional-889240",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da15d31dc23bdd4694ae9e3b61015d7ce0d61668c73d3e386422834c6f0321d8",
	            "SandboxKey": "/var/run/docker/netns/da15d31dc23b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-889240": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:9e:1d:e9:d9:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "03281bed183d0817c0bc237b5c25093fc10222138aedde4c7deef5823759fa24",
	                    "EndpointID": "28fa584fdd6e253816ae08a2460ef02b91085c8a7996d55008876e3bd65bbc7e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-889240",
	                        "9f4f0f10b4a9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240: exit status 2 (307.469236ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 logs -n 25
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image     │ functional-889240 image ls                                                                                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image     │ functional-889240 image save kicbase/echo-server:functional-889240 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh       │ functional-889240 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image     │ functional-889240 image rm kicbase/echo-server:functional-889240 --alsologtostderr                                                                              │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh       │ functional-889240 ssh -- ls -la /mount-9p                                                                                                                       │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image     │ functional-889240 image ls                                                                                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh       │ functional-889240 ssh cat /mount-9p/test-1759516010263030140                                                                                                    │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image     │ functional-889240 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image     │ functional-889240 image save --daemon kicbase/echo-server:functional-889240 --alsologtostderr                                                                   │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh       │ functional-889240 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                                │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh       │ functional-889240 ssh echo hello                                                                                                                                │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh       │ functional-889240 ssh sudo umount -f /mount-9p                                                                                                                  │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh       │ functional-889240 ssh cat /etc/hostname                                                                                                                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ tunnel    │ functional-889240 tunnel --alsologtostderr                                                                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ tunnel    │ functional-889240 tunnel --alsologtostderr                                                                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ mount     │ -p functional-889240 /tmp/TestFunctionalparallelMountCmdspecific-port3898317380/001:/mount-9p --alsologtostderr -v=1 --port 46464                               │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh       │ functional-889240 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ start     │ -p functional-889240 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ tunnel    │ functional-889240 tunnel --alsologtostderr                                                                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ start     │ -p functional-889240 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                 │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ start     │ -p functional-889240 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh       │ functional-889240 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ dashboard │ --url --port 36195 -p functional-889240 --alsologtostderr -v=1                                                                                                  │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh       │ functional-889240 ssh -- ls -la /mount-9p                                                                                                                       │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh       │ functional-889240 ssh sudo umount -f /mount-9p                                                                                                                  │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:26:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:26:53.356472   58930 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:26:53.356745   58930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:26:53.356756   58930 out.go:374] Setting ErrFile to fd 2...
	I1003 18:26:53.356762   58930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:26:53.357062   58930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:26:53.357508   58930 out.go:368] Setting JSON to false
	I1003 18:26:53.358398   58930 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4164,"bootTime":1759511849,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:26:53.358491   58930 start.go:140] virtualization: kvm guest
	I1003 18:26:53.360378   58930 out.go:179] * [functional-889240] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:26:53.361688   58930 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:26:53.361693   58930 notify.go:220] Checking for updates...
	I1003 18:26:53.363055   58930 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:26:53.364385   58930 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:26:53.365536   58930 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:26:53.366672   58930 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:26:53.367760   58930 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:26:53.369355   58930 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:26:53.369795   58930 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:26:53.393358   58930 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:26:53.393501   58930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:26:53.449005   58930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:26:53.436272745 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:26:53.449135   58930 docker.go:318] overlay module found
	I1003 18:26:53.451084   58930 out.go:179] * Using the docker driver based on existing profile
	I1003 18:26:53.452223   58930 start.go:304] selected driver: docker
	I1003 18:26:53.452240   58930 start.go:924] validating driver "docker" against &{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:26:53.452344   58930 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:26:53.454148   58930 out.go:203] 
	W1003 18:26:53.455299   58930 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1003 18:26:53.456336   58930 out.go:203] 
	
	
	==> CRI-O <==
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.31803988Z" level=info msg="Checking image status: kicbase/echo-server:functional-889240" id=8dbb9dc5-c0bc-4fb6-8380-f67e530bd701 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.351152043Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-889240" id=f8b0cdf9-8a3c-47b8-827a-041430aa968f name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.3512939Z" level=info msg="Image docker.io/kicbase/echo-server:functional-889240 not found" id=f8b0cdf9-8a3c-47b8-827a-041430aa968f name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.351334555Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-889240 found" id=f8b0cdf9-8a3c-47b8-827a-041430aa968f name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.385065792Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-889240" id=18ee7915-b6dc-477f-a8be-8e74388993fd name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.38551812Z" level=info msg="Image localhost/kicbase/echo-server:functional-889240 not found" id=18ee7915-b6dc-477f-a8be-8e74388993fd name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.385573149Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-889240 found" id=18ee7915-b6dc-477f-a8be-8e74388993fd name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.925244555Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=3cdd4878-6c29-4f9c-a7c1-e8d24b35f518 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.926186275Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=999b78e6-746a-4495-9410-a789f6c9b2d1 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.927345786Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-889240/kube-apiserver" id=a287f5c1-738c-44f4-93cf-fa8b273170d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.927608075Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.935572875Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.937507124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.951683101Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a287f5c1-738c-44f4-93cf-fa8b273170d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.953670541Z" level=info msg="createCtr: deleting container ID 94a45024cee963f25522950daa008598cc2b6a92c31321cf665c9a52bed71c52 from idIndex" id=a287f5c1-738c-44f4-93cf-fa8b273170d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.953726976Z" level=info msg="createCtr: removing container 94a45024cee963f25522950daa008598cc2b6a92c31321cf665c9a52bed71c52" id=a287f5c1-738c-44f4-93cf-fa8b273170d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.953775078Z" level=info msg="createCtr: deleting container 94a45024cee963f25522950daa008598cc2b6a92c31321cf665c9a52bed71c52 from storage" id=a287f5c1-738c-44f4-93cf-fa8b273170d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.957548935Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-889240_kube-system_9d9b7aefd7427246dd018814b6979298_0" id=a287f5c1-738c-44f4-93cf-fa8b273170d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.292108767Z" level=info msg="Checking image status: kicbase/echo-server:functional-889240" id=abbb2808-ed68-484b-b163-379c059f6d17 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.319279189Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-889240" id=d452395f-c84e-40da-918e-c48346047241 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.319865155Z" level=info msg="Image docker.io/kicbase/echo-server:functional-889240 not found" id=d452395f-c84e-40da-918e-c48346047241 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.319920152Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-889240 found" id=d452395f-c84e-40da-918e-c48346047241 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.352587677Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-889240" id=5d9b33fc-7d75-497a-8748-fc1b3d440fcc name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.352740621Z" level=info msg="Image localhost/kicbase/echo-server:functional-889240 not found" id=5d9b33fc-7d75-497a-8748-fc1b3d440fcc name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.352785301Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-889240 found" id=5d9b33fc-7d75-497a-8748-fc1b3d440fcc name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:26:54.799466   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:54.800063   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:54.801678   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:54.802263   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:54.804071   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:26:54 up  1:09,  0 user,  load average: 1.01, 0.26, 0.10
	Linux functional-889240 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:26:45 functional-889240 kubelet[15004]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-889240_kube-system(7e715cb6024854d45a9fa99576167e43): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:45 functional-889240 kubelet[15004]:  > logger="UnhandledError"
	Oct 03 18:26:45 functional-889240 kubelet[15004]: E1003 18:26:45.950027   15004 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-889240" podUID="7e715cb6024854d45a9fa99576167e43"
	Oct 03 18:26:47 functional-889240 kubelet[15004]: E1003 18:26:47.292830   15004 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-889240&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 03 18:26:47 functional-889240 kubelet[15004]: E1003 18:26:47.311203   15004 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-889240.186b0e42e698a181  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-889240,UID:functional-889240,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-889240 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-889240,},FirstTimestamp:2025-10-03 18:22:39.917703553 +0000 UTC m=+1.131431312,LastTimestamp:2025-10-03 18:22:39.917703553 +0000 UTC m=+1.131431312,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-889240,}"
	Oct 03 18:26:47 functional-889240 kubelet[15004]: E1003 18:26:47.924783   15004 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:26:47 functional-889240 kubelet[15004]: E1003 18:26:47.967879   15004 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:26:47 functional-889240 kubelet[15004]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:47 functional-889240 kubelet[15004]:  > podSandboxID="cc37714218db619cb7a417ce510ab6d24921b06cab2510376343b7b5c57bba9a"
	Oct 03 18:26:47 functional-889240 kubelet[15004]: E1003 18:26:47.967997   15004 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:26:47 functional-889240 kubelet[15004]:         container kube-scheduler start failed in pod kube-scheduler-functional-889240_kube-system(7dadd1df42d6a2c3d1907f134f7d5ea7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:47 functional-889240 kubelet[15004]:  > logger="UnhandledError"
	Oct 03 18:26:47 functional-889240 kubelet[15004]: E1003 18:26:47.968041   15004 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-889240" podUID="7dadd1df42d6a2c3d1907f134f7d5ea7"
	Oct 03 18:26:49 functional-889240 kubelet[15004]: E1003 18:26:49.940484   15004 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-889240\" not found"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.548387   15004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-889240?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: I1003 18:26:50.701447   15004 kubelet_node_status.go:75] "Attempting to register node" node="functional-889240"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.702007   15004 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-889240"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.924684   15004 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.958040   15004 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:26:50 functional-889240 kubelet[15004]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:50 functional-889240 kubelet[15004]:  > podSandboxID="d2a1f7a262459adddcbc8998558ca80ae50f332cedd95d5813e79fa17642c365"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.958159   15004 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:26:50 functional-889240 kubelet[15004]:         container kube-apiserver start failed in pod kube-apiserver-functional-889240_kube-system(9d9b7aefd7427246dd018814b6979298): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:50 functional-889240 kubelet[15004]:  > logger="UnhandledError"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.958199   15004 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-889240" podUID="9d9b7aefd7427246dd018814b6979298"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240: exit status 2 (293.228959ms)

-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-889240" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (1.70s)
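
The kubelet excerpt above fails the same way for every control-plane container: crio reports CreateContainerError with "cannot open sd-bus: No such file or directory". With cgroup_manager = "systemd" (configured later in this report's "Last Start" log), the runtime needs systemd's D-Bus socket inside the kicbase container, so one hedged first check is whether that socket exists and whether systemd came up at all. A minimal diagnostic sketch, assuming the node container is still running and reachable over minikube ssh:

	# does the systemd system bus socket exist inside the node?
	out/minikube-linux-amd64 -p functional-889240 ssh -- ls -l /run/dbus/system_bus_socket
	# did systemd inside the kicbase container reach a usable state?
	out/minikube-linux-amd64 -p functional-889240 ssh -- sudo systemctl is-system-running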

x
+
TestFunctional/parallel/StatusCmd (3.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd


=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889240 status: exit status 2 (354.636766ms)

-- stdout --
	functional-889240
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-amd64 -p functional-889240 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889240 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (369.537542ms)

-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-amd64 -p functional-889240 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
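Note that the "kublet" spelling in the output above is not a minikube bug: the -f/--format flag takes a Go template in which only the {{.Field}} expressions expand, so that label comes verbatim from the test's own format string. For illustration, the same real flag with a corrected (hypothetical) label:
	out/minikube-linux-amd64 -p functional-889240 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'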
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889240 status -o json: exit status 2 (361.413237ms)

-- stdout --
	{"Name":"functional-889240","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-amd64 -p functional-889240 status -o json" : exit status 2
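For scripted checks, the JSON form above is easier to gate on than the exit code, since status exits non-zero whenever any component is not running. A short sketch, assuming jq is available on the host:
	# treat the cluster as ready only when the apiserver reports Running;
	# "|| true" absorbs the non-zero exit status used for stopped components
	state=$(out/minikube-linux-amd64 -p functional-889240 status -o json | jq -r '.APIServer' || true)
	[ "$state" = "Running" ] && echo "apiserver up" || echo "apiserver: $state"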
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-889240
helpers_test.go:243: (dbg) docker inspect functional-889240:

-- stdout --
	[
	    {
	        "Id": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	        "Created": "2025-10-03T17:59:56.619817507Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 26766,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T17:59:56.652603806Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hosts",
	        "LogPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a-json.log",
	        "Name": "/functional-889240",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-889240:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-889240",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	                "LowerDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-889240",
	                "Source": "/var/lib/docker/volumes/functional-889240/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-889240",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-889240",
	                "name.minikube.sigs.k8s.io": "functional-889240",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da15d31dc23bdd4694ae9e3b61015d7ce0d61668c73d3e386422834c6f0321d8",
	            "SandboxKey": "/var/run/docker/netns/da15d31dc23b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-889240": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:9e:1d:e9:d9:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "03281bed183d0817c0bc237b5c25093fc10222138aedde4c7deef5823759fa24",
	                    "EndpointID": "28fa584fdd6e253816ae08a2460ef02b91085c8a7996d55008876e3bd65bbc7e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-889240",
	                        "9f4f0f10b4a9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
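The NetworkSettings.Ports block in the inspect dump above is where the node's published host ports live (the provisioning log below resolves 22/tcp the same way). A one-liner to pull a single mapping, using the container name from this report:
	# host port that Docker published for the apiserver port (8441/tcp)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-889240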
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240: exit status 2 (332.672511ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-889240 logs -n 25: (1.061043041s)
helpers_test.go:260: TestFunctional/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ config  │ functional-889240 config get cpus                                                                                                                               │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ service │ functional-889240 service list -o json                                                                                                                          │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh     │ functional-889240 ssh sudo systemctl is-active containerd                                                                                                       │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ cp      │ functional-889240 cp functional-889240:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd328528260/001/cp-test.txt                                       │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ service │ functional-889240 service --namespace=default --https --url hello-node                                                                                          │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ service │ functional-889240 service hello-node --url --format={{.IP}}                                                                                                     │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh     │ functional-889240 ssh -n functional-889240 sudo cat /home/docker/cp-test.txt                                                                                    │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image load --daemon kicbase/echo-server:functional-889240 --alsologtostderr                                                                   │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ service │ functional-889240 service hello-node --url                                                                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ cp      │ functional-889240 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                       │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh     │ functional-889240 ssh -n functional-889240 sudo cat /tmp/does/not/exist/cp-test.txt                                                                             │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image ls                                                                                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh     │ functional-889240 ssh sudo cat /etc/ssl/certs/12212.pem                                                                                                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image load --daemon kicbase/echo-server:functional-889240 --alsologtostderr                                                                   │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh     │ functional-889240 ssh sudo cat /usr/share/ca-certificates/12212.pem                                                                                             │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh     │ functional-889240 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image ls                                                                                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh     │ functional-889240 ssh sudo cat /etc/ssl/certs/122122.pem                                                                                                        │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh     │ functional-889240 ssh sudo cat /usr/share/ca-certificates/122122.pem                                                                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image load --daemon kicbase/echo-server:functional-889240 --alsologtostderr                                                                   │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh     │ functional-889240 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh     │ functional-889240 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ mount   │ -p functional-889240 /tmp/TestFunctionalparallelMountCmdany-port2363591403/001:/mount-9p --alsologtostderr -v=1                                                 │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ image   │ functional-889240 image ls                                                                                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image save kicbase/echo-server:functional-889240 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:14:28
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:14:28.726754   38063 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:14:28.726997   38063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:14:28.727000   38063 out.go:374] Setting ErrFile to fd 2...
	I1003 18:14:28.727003   38063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:14:28.727268   38063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:14:28.727968   38063 out.go:368] Setting JSON to false
	I1003 18:14:28.729004   38063 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3420,"bootTime":1759511849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:14:28.729075   38063 start.go:140] virtualization: kvm guest
	I1003 18:14:28.731008   38063 out.go:179] * [functional-889240] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:14:28.732488   38063 notify.go:220] Checking for updates...
	I1003 18:14:28.732492   38063 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:14:28.733579   38063 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:14:28.734939   38063 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:14:28.736179   38063 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:14:28.737411   38063 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:14:28.738587   38063 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:14:28.740087   38063 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:14:28.740180   38063 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:14:28.764594   38063 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:14:28.764685   38063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:14:28.818292   38063 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-03 18:14:28.807876558 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:14:28.818395   38063 docker.go:318] overlay module found
	I1003 18:14:28.820263   38063 out.go:179] * Using the docker driver based on existing profile
	I1003 18:14:28.821380   38063 start.go:304] selected driver: docker
	I1003 18:14:28.821386   38063 start.go:924] validating driver "docker" against &{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:14:28.821453   38063 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:14:28.821525   38063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:14:28.873759   38063 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-03 18:14:28.863222744 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:14:28.874408   38063 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:14:28.874443   38063 cni.go:84] Creating CNI manager for ""
	I1003 18:14:28.874490   38063 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:14:28.874537   38063 start.go:348] cluster config:
	{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:14:28.876500   38063 out.go:179] * Starting "functional-889240" primary control-plane node in "functional-889240" cluster
	I1003 18:14:28.877706   38063 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:14:28.878837   38063 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:14:28.879769   38063 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:14:28.879795   38063 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:14:28.879802   38063 cache.go:58] Caching tarball of preloaded images
	I1003 18:14:28.879865   38063 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:14:28.879873   38063 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:14:28.879879   38063 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:14:28.879967   38063 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/config.json ...
	I1003 18:14:28.899017   38063 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:14:28.899026   38063 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:14:28.899040   38063 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:14:28.899069   38063 start.go:360] acquireMachinesLock for functional-889240: {Name:mk6750a9fb1c1c3747b0abf2aebe2a2d0047ae3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:14:28.899117   38063 start.go:364] duration metric: took 35.993µs to acquireMachinesLock for "functional-889240"
	I1003 18:14:28.899130   38063 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:14:28.899133   38063 fix.go:54] fixHost starting: 
	I1003 18:14:28.899327   38063 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:14:28.916111   38063 fix.go:112] recreateIfNeeded on functional-889240: state=Running err=<nil>
	W1003 18:14:28.916134   38063 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 18:14:28.918050   38063 out.go:252] * Updating the running docker "functional-889240" container ...
	I1003 18:14:28.918084   38063 machine.go:93] provisionDockerMachine start ...
	I1003 18:14:28.918165   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:28.934689   38063 main.go:141] libmachine: Using SSH client type: native
	I1003 18:14:28.934913   38063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:14:28.934921   38063 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:14:29.076697   38063 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-889240
	
	I1003 18:14:29.076727   38063 ubuntu.go:182] provisioning hostname "functional-889240"
	I1003 18:14:29.076782   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:29.092887   38063 main.go:141] libmachine: Using SSH client type: native
	I1003 18:14:29.093101   38063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:14:29.093108   38063 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-889240 && echo "functional-889240" | sudo tee /etc/hostname
	I1003 18:14:29.242886   38063 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-889240
	
	I1003 18:14:29.242996   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:29.260006   38063 main.go:141] libmachine: Using SSH client type: native
	I1003 18:14:29.260203   38063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:14:29.260220   38063 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-889240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-889240/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-889240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:14:29.401432   38063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:14:29.401463   38063 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:14:29.401485   38063 ubuntu.go:190] setting up certificates
	I1003 18:14:29.401496   38063 provision.go:84] configureAuth start
	I1003 18:14:29.401542   38063 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-889240
	I1003 18:14:29.417679   38063 provision.go:143] copyHostCerts
	I1003 18:14:29.417732   38063 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:14:29.417754   38063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:14:29.417818   38063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:14:29.417930   38063 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:14:29.417934   38063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:14:29.417959   38063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:14:29.418062   38063 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:14:29.418066   38063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:14:29.418091   38063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:14:29.418151   38063 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.functional-889240 san=[127.0.0.1 192.168.49.2 functional-889240 localhost minikube]
	I1003 18:14:29.517156   38063 provision.go:177] copyRemoteCerts
	I1003 18:14:29.517211   38063 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:14:29.517244   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:29.534610   38063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:14:29.634576   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:14:29.651152   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1003 18:14:29.667404   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 18:14:29.683300   38063 provision.go:87] duration metric: took 281.795524ms to configureAuth
	I1003 18:14:29.683315   38063 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:14:29.683451   38063 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:14:29.683536   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:29.701238   38063 main.go:141] libmachine: Using SSH client type: native
	I1003 18:14:29.701444   38063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:14:29.701460   38063 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:14:29.964774   38063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:14:29.964789   38063 machine.go:96] duration metric: took 1.046699275s to provisionDockerMachine
	I1003 18:14:29.964799   38063 start.go:293] postStartSetup for "functional-889240" (driver="docker")
	I1003 18:14:29.964807   38063 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:14:29.964862   38063 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:14:29.964919   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:29.982141   38063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:14:30.082849   38063 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:14:30.086167   38063 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:14:30.086182   38063 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:14:30.086190   38063 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:14:30.086245   38063 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:14:30.086322   38063 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:14:30.086390   38063 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts -> hosts in /etc/test/nested/copy/12212
	I1003 18:14:30.086418   38063 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/12212
	I1003 18:14:30.093540   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:14:30.109775   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts --> /etc/test/nested/copy/12212/hosts (40 bytes)
	I1003 18:14:30.125563   38063 start.go:296] duration metric: took 160.752264ms for postStartSetup
	I1003 18:14:30.125613   38063 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:14:30.125652   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:30.142705   38063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:14:30.239819   38063 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:14:30.244462   38063 fix.go:56] duration metric: took 1.345323072s for fixHost
	I1003 18:14:30.244476   38063 start.go:83] releasing machines lock for "functional-889240", held for 1.345352654s
	I1003 18:14:30.244534   38063 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-889240
	I1003 18:14:30.261148   38063 ssh_runner.go:195] Run: cat /version.json
	I1003 18:14:30.261181   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:30.261277   38063 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:14:30.261317   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:30.278533   38063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:14:30.278911   38063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:14:30.374843   38063 ssh_runner.go:195] Run: systemctl --version
	I1003 18:14:30.426119   38063 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:14:30.460148   38063 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:14:30.464555   38063 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:14:30.464600   38063 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:14:30.471950   38063 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 18:14:30.471961   38063 start.go:495] detecting cgroup driver to use...
	I1003 18:14:30.472000   38063 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:14:30.472044   38063 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:14:30.485257   38063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:14:30.496477   38063 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:14:30.496516   38063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:14:30.510101   38063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:14:30.521418   38063 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:14:30.603143   38063 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:14:30.686683   38063 docker.go:234] disabling docker service ...
	I1003 18:14:30.686723   38063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:14:30.700010   38063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:14:30.711397   38063 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:14:30.789401   38063 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:14:30.867745   38063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:14:30.879595   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:14:30.892654   38063 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:14:30.892698   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.901033   38063 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:14:30.901080   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.909297   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.917346   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.925200   38063 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:14:30.932963   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.941075   38063 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.948857   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
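The run of sed one-liners above rewrites keys in /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl). A Go equivalent of just the cgroup_manager edit, operating on an in-memory copy so it is safe to run anywhere; the key and value come from the log:

package main

import (
	"fmt"
	"regexp"
)

// setCgroupManager mirrors the log's sed expression: replace any line
// mentioning cgroup_manager (commented or not) with an explicit setting.
func setCgroupManager(conf []byte, driver string) []byte {
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	return re.ReplaceAll(conf, []byte(fmt.Sprintf("cgroup_manager = %q", driver)))
}

func main() {
	in := []byte("# cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"system.slice\"\n")
	fmt.Printf("%s", setCgroupManager(in, "systemd"))
}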
	I1003 18:14:30.956661   38063 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:14:30.963293   38063 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:14:30.969876   38063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:14:31.048833   38063 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 18:14:31.154686   38063 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:14:31.154732   38063 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:14:31.158463   38063 start.go:563] Will wait 60s for crictl version
	I1003 18:14:31.158505   38063 ssh_runner.go:195] Run: which crictl
	I1003 18:14:31.161802   38063 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:14:31.185028   38063 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:14:31.185099   38063 ssh_runner.go:195] Run: crio --version
	I1003 18:14:31.211351   38063 ssh_runner.go:195] Run: crio --version
	I1003 18:14:31.239599   38063 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:14:31.241121   38063 cli_runner.go:164] Run: docker network inspect functional-889240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:14:31.257340   38063 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:14:31.263166   38063 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1003 18:14:31.264167   38063 kubeadm.go:883] updating cluster {Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:14:31.264267   38063 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:14:31.264310   38063 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:14:31.293848   38063 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:14:31.293858   38063 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:14:31.293907   38063 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:14:31.319316   38063 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:14:31.319326   38063 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:14:31.319331   38063 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1003 18:14:31.319423   38063 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-889240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 18:14:31.319482   38063 ssh_runner.go:195] Run: crio config
	I1003 18:14:31.363053   38063 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1003 18:14:31.363070   38063 cni.go:84] Creating CNI manager for ""
	I1003 18:14:31.363079   38063 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:14:31.363097   38063 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:14:31.363115   38063 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-889240 NodeName:functional-889240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:14:31.363211   38063 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-889240"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 18:14:31.363260   38063 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:14:31.371060   38063 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:14:31.371113   38063 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:14:31.378260   38063 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1003 18:14:31.389622   38063 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:14:31.401169   38063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1003 18:14:31.413278   38063 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 18:14:31.416670   38063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:14:31.493997   38063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:14:31.506325   38063 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240 for IP: 192.168.49.2
	I1003 18:14:31.506337   38063 certs.go:195] generating shared ca certs ...
	I1003 18:14:31.506355   38063 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:14:31.506504   38063 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:14:31.506539   38063 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:14:31.506544   38063 certs.go:257] generating profile certs ...
	I1003 18:14:31.506611   38063 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.key
	I1003 18:14:31.506654   38063 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key.eb3f8f7c
	I1003 18:14:31.506684   38063 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key
	I1003 18:14:31.506800   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:14:31.506838   38063 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:14:31.506844   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:14:31.506863   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:14:31.506885   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:14:31.506914   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:14:31.506949   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:14:31.507555   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:14:31.523949   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:14:31.540075   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:14:31.556229   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:14:31.572472   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 18:14:31.588618   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:14:31.604606   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:14:31.620082   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:14:31.636014   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:14:31.652102   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:14:31.668081   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:14:31.684503   38063 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:14:31.696104   38063 ssh_runner.go:195] Run: openssl version
	I1003 18:14:31.701806   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:14:31.709474   38063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:14:31.712729   38063 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:14:31.712776   38063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:14:31.746262   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
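The openssl x509 -hash / ln -fs pairs above install CAs the way OpenSSL expects them: /etc/ssl/certs is looked up by subject-hash filenames such as b5213941.0. A sketch that computes the link name the same way, shelling out to openssl (which must be on PATH; error handling trimmed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hashedLinkName returns the <subject-hash>.0 filename OpenSSL would use
// to look up the CA certificate at certPath.
func hashedLinkName(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	name, err := hashedLinkName("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	fmt.Println("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/" + name)
}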
	I1003 18:14:31.754238   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:14:31.762041   38063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:14:31.765354   38063 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:14:31.765385   38063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:14:31.799341   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
	I1003 18:14:31.807532   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:14:31.815668   38063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:14:31.819149   38063 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:14:31.819195   38063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:14:31.853378   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:14:31.861557   38063 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:14:31.865026   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 18:14:31.898216   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 18:14:31.931439   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 18:14:31.964848   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 18:14:31.997996   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 18:14:32.031331   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
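Each openssl x509 -checkend 86400 call above asks a yes/no question: will this certificate still be valid 86400 seconds (24h) from now? A pure-Go equivalent using crypto/x509, assuming a PEM certificate path on the command line:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at
// path expires before now+window, matching `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: checkend <cert.pem>")
		os.Exit(2)
	}
	soon, err := expiresWithin(os.Args[1], 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		os.Exit(2)
	}
	if soon {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is good for at least 24h")
}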
	I1003 18:14:32.064773   38063 kubeadm.go:400] StartCluster: {Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:14:32.064844   38063 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:14:32.064884   38063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:14:32.091563   38063 cri.go:89] found id: ""
	I1003 18:14:32.091628   38063 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:14:32.099575   38063 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 18:14:32.099617   38063 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 18:14:32.099649   38063 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 18:14:32.106476   38063 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:14:32.106922   38063 kubeconfig.go:125] found "functional-889240" server: "https://192.168.49.2:8441"
	I1003 18:14:32.108169   38063 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 18:14:32.115724   38063 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-03 18:00:01.716218369 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-03 18:14:31.411258298 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
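Drift detection here is just a unified diff plus exit-status inspection: diff exits 1 when the files differ. A sketch of the same check (paths from the log; sudo assumed available):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("no drift: configs are identical")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		// diff's exit status 1 means "files differ" -> reconfigure.
		fmt.Printf("kubeadm config drift detected:\n%s", out)
	default:
		fmt.Println("diff failed to run:", err)
	}
}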
	I1003 18:14:32.115731   38063 kubeadm.go:1160] stopping kube-system containers ...
	I1003 18:14:32.115740   38063 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1003 18:14:32.115779   38063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:14:32.142745   38063 cri.go:89] found id: ""
	I1003 18:14:32.142803   38063 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1003 18:14:32.181602   38063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:14:32.189432   38063 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  3 18:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct  3 18:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  3 18:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  3 18:04 /etc/kubernetes/scheduler.conf
	
	I1003 18:14:32.189481   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1003 18:14:32.196894   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1003 18:14:32.203921   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:14:32.203965   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:14:32.210881   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1003 18:14:32.217766   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:14:32.217803   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:14:32.224334   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1003 18:14:32.231030   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:14:32.231065   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
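The grep / rm pairs above keep a kubeconfig only if it already points at https://control-plane.minikube.internal:8441; anything stale is deleted so the kubeconfig init phase below regenerates it. The same pass, sketched in Go (paths and endpoint from the log; needs root against a real /etc/kubernetes):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8441"
	for _, path := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing at the wrong endpoint: drop it so
			// `kubeadm init phase kubeconfig all` rewrites it.
			fmt.Println("removing stale", path)
			os.Remove(path)
		}
	}
}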
	I1003 18:14:32.237472   38063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:14:32.244457   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:14:32.283268   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:14:33.742947   38063 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.459652347s)
	I1003 18:14:33.743017   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:14:33.898116   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:14:33.942573   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
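Rather than a full kubeadm init, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the refreshed config. A sketch of that loop; the binary and config paths are the ones in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.34.1/kubeadm"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{kubeadm, "init", "phase"}, phase...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		// Run each phase in order, stopping at the first failure.
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s", phase, err, out)
			return
		}
	}
}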
	I1003 18:14:33.988522   38063 api_server.go:52] waiting for apiserver process to appear ...
	I1003 18:14:33.988576   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:34.488790   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:34.989160   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:35.489680   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:35.988868   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:36.488719   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:36.989189   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:37.488931   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:37.988689   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:38.489192   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:38.988747   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:39.488853   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:39.988726   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:40.488885   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:40.988836   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:41.489087   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:41.989102   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:42.489308   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:42.989350   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:43.489437   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:43.989370   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:44.489479   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:44.989473   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:45.489475   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:45.989163   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:46.489071   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:46.989061   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:47.489362   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:47.989160   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:48.489058   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:48.989044   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:49.489308   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:49.989261   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:50.489305   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:50.989055   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:51.488843   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:51.989620   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:52.489351   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:52.989238   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:53.489255   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:53.989220   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:54.488852   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:54.988693   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:55.488676   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:55.989529   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:56.488743   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:56.988770   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:57.489696   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:57.989499   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:58.489418   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:58.988677   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:59.488958   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:59.988929   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:00.488655   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:00.989293   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:01.489448   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:01.989466   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:02.489205   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:02.989600   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:03.489423   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:03.989351   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:04.489050   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:04.989610   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:05.489685   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:05.988959   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:06.488882   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:06.988912   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:07.488777   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:07.988801   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:08.489543   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:08.989468   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:09.489298   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:09.989123   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:10.489003   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:10.988801   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:11.489568   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:11.989184   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:12.489371   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:12.989143   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:13.488941   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:13.988874   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:14.489673   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:14.989633   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:15.489486   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:15.989281   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:16.489642   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:16.989478   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:17.489111   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:17.989045   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:18.488802   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:18.988734   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:19.489569   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:19.989541   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:20.488747   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:20.989602   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:21.488839   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:21.989691   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:22.489669   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:22.989667   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:23.489632   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:23.989542   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:24.489501   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:24.989204   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:25.488757   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:25.989320   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:26.489097   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:26.988902   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:27.489585   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:27.989335   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:28.489024   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:28.988936   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:29.488782   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:29.989706   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:30.489391   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:30.989093   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:31.488928   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:31.988795   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:32.488796   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:32.988671   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:33.489525   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
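The half-second cadence of pgrep probes above is a plain wait-for-process poll against the 60s apiserver window. A minimal sketch of the pattern (interval and timeout are illustrative, not necessarily minikube's exact values; the pgrep pattern is copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process appears or
// the deadline passes. pgrep exits 0 once a matching process exists.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		fmt.Println(err)
	}
}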
	I1003 18:15:33.989163   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:33.989216   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:34.014490   38063 cri.go:89] found id: ""
	I1003 18:15:34.014506   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.014513   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:34.014518   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:34.014556   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:34.039203   38063 cri.go:89] found id: ""
	I1003 18:15:34.039217   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.039223   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:34.039227   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:34.039266   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:34.064423   38063 cri.go:89] found id: ""
	I1003 18:15:34.064440   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.064448   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:34.064452   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:34.064494   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:34.089636   38063 cri.go:89] found id: ""
	I1003 18:15:34.089650   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.089661   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:34.089665   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:34.089707   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:34.114198   38063 cri.go:89] found id: ""
	I1003 18:15:34.114211   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.114217   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:34.114221   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:34.114261   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:34.138167   38063 cri.go:89] found id: ""
	I1003 18:15:34.138180   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.138186   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:34.138190   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:34.138234   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:34.163057   38063 cri.go:89] found id: ""
	I1003 18:15:34.163071   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.163079   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:34.163090   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:34.163102   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:34.230868   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:34.230885   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:34.242117   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:34.242134   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:34.296197   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:34.289745    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.290228    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.291731    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.292260    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.293746    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:34.289745    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.290228    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.291731    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.292260    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.293746    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
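The connection refused errors above just mean nothing is listening on localhost:8441 yet; every kubectl call in the diagnostics sweep will fail the same way until an apiserver container comes up. A bare TCP probe of the same endpoint:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint kubectl is dialing in the errors above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}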
	I1003 18:15:34.296208   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:34.296218   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:34.353696   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:34.353715   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:36.882850   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:36.893827   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:36.893878   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:36.918928   38063 cri.go:89] found id: ""
	I1003 18:15:36.918945   38063 logs.go:282] 0 containers: []
	W1003 18:15:36.918954   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:36.918960   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:36.919024   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:36.943500   38063 cri.go:89] found id: ""
	I1003 18:15:36.943516   38063 logs.go:282] 0 containers: []
	W1003 18:15:36.943524   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:36.943529   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:36.943571   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:36.967892   38063 cri.go:89] found id: ""
	I1003 18:15:36.967909   38063 logs.go:282] 0 containers: []
	W1003 18:15:36.967917   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:36.967921   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:36.967961   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:36.992302   38063 cri.go:89] found id: ""
	I1003 18:15:36.992316   38063 logs.go:282] 0 containers: []
	W1003 18:15:36.992322   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:36.992326   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:36.992371   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:37.017414   38063 cri.go:89] found id: ""
	I1003 18:15:37.017429   38063 logs.go:282] 0 containers: []
	W1003 18:15:37.017435   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:37.017440   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:37.017483   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:37.042577   38063 cri.go:89] found id: ""
	I1003 18:15:37.042596   38063 logs.go:282] 0 containers: []
	W1003 18:15:37.042601   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:37.042606   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:37.042652   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:37.067424   38063 cri.go:89] found id: ""
	I1003 18:15:37.067438   38063 logs.go:282] 0 containers: []
	W1003 18:15:37.067444   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:37.067451   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:37.067466   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:37.133058   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:37.133076   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:37.144095   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:37.144109   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:37.201432   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:37.195051    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.195552    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.197089    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.197493    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.198600    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:37.195051    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.195552    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.197089    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.197493    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.198600    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:37.201453   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:37.201464   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:37.264020   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:37.264041   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:39.793917   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:39.804160   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:39.804201   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:39.828532   38063 cri.go:89] found id: ""
	I1003 18:15:39.828545   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.828551   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:39.828557   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:39.828595   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:39.854181   38063 cri.go:89] found id: ""
	I1003 18:15:39.854194   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.854199   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:39.854203   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:39.854241   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:39.878636   38063 cri.go:89] found id: ""
	I1003 18:15:39.878649   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.878655   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:39.878665   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:39.878714   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:39.903647   38063 cri.go:89] found id: ""
	I1003 18:15:39.903662   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.903672   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:39.903678   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:39.903727   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:39.928358   38063 cri.go:89] found id: ""
	I1003 18:15:39.928371   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.928377   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:39.928382   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:39.928425   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:39.952698   38063 cri.go:89] found id: ""
	I1003 18:15:39.952712   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.952718   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:39.952722   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:39.952770   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:39.977762   38063 cri.go:89] found id: ""
	I1003 18:15:39.977779   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.977788   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:39.977798   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:39.977810   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:40.047503   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:40.047521   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:40.058597   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:40.058612   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:40.113456   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:40.107101    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.107593    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.109120    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.109527    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.111020    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:40.107101    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.107593    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.109120    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.109527    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.111020    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:40.113474   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:40.113485   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:40.173884   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:40.173904   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
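
The block above is one complete iteration of minikube's apiserver wait loop: it pgreps for a kube-apiserver process, asks the CRI runtime for each expected control-plane container by name, finds none, and falls back to gathering diagnostics. A minimal Go sketch of the per-component check follows; the helper name `listCRIContainers` is illustrative, not minikube's actual API, and it assumes `sudo` and `crictl` are available on the node.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers mirrors the "listing CRI containers" step in the log:
// it runs `sudo crictl ps -a --quiet --name=<name>` and returns any IDs.
func listCRIContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		ids, err := listCRIContainers(c)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			// Corresponds to the W-level "No container was found matching" lines.
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}
```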
	I1003 18:15:42.702098   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:42.712135   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:42.712176   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:42.735423   38063 cri.go:89] found id: ""
	I1003 18:15:42.735438   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.735445   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:42.735450   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:42.735502   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:42.758834   38063 cri.go:89] found id: ""
	I1003 18:15:42.758847   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.758853   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:42.758857   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:42.758918   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:42.782548   38063 cri.go:89] found id: ""
	I1003 18:15:42.782564   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.782573   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:42.782578   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:42.782631   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:42.808289   38063 cri.go:89] found id: ""
	I1003 18:15:42.808307   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.808315   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:42.808321   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:42.808362   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:42.832106   38063 cri.go:89] found id: ""
	I1003 18:15:42.832120   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.832126   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:42.832136   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:42.832178   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:42.856681   38063 cri.go:89] found id: ""
	I1003 18:15:42.856697   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.856704   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:42.856708   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:42.856753   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:42.880778   38063 cri.go:89] found id: ""
	I1003 18:15:42.880793   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.880799   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:42.880806   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:42.880815   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:42.891568   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:42.891591   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:42.944856   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:42.938479    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.938960    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.940463    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.940834    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.942358    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:42.938479    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.938960    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.940463    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.940834    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.942358    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:42.944869   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:42.944883   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:43.008325   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:43.008342   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:43.034919   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:43.034934   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
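
Every `kubectl describe nodes` attempt in these iterations fails the same way: the in-node kubectl dials the apiserver at localhost:8441 (this profile's apiserver port) and the connection is refused, which is consistent with the earlier finding that no kube-apiserver container exists. A minimal probe that reproduces the same symptom, using only the standard library:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// While no kube-apiserver is listening, this fails with
	// "connect: connection refused", exactly as in the kubectl stderr above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```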
	I1003 18:15:45.601892   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:45.612293   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:45.612337   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:45.636800   38063 cri.go:89] found id: ""
	I1003 18:15:45.636816   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.636825   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:45.636831   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:45.636897   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:45.663419   38063 cri.go:89] found id: ""
	I1003 18:15:45.663431   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.663442   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:45.663446   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:45.663484   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:45.688326   38063 cri.go:89] found id: ""
	I1003 18:15:45.688340   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.688346   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:45.688350   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:45.688390   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:45.713903   38063 cri.go:89] found id: ""
	I1003 18:15:45.713916   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.713923   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:45.713929   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:45.713969   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:45.738540   38063 cri.go:89] found id: ""
	I1003 18:15:45.738554   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.738560   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:45.738565   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:45.738626   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:45.763029   38063 cri.go:89] found id: ""
	I1003 18:15:45.763042   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.763049   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:45.763054   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:45.763105   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:45.787593   38063 cri.go:89] found id: ""
	I1003 18:15:45.787605   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.787613   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:45.787619   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:45.787628   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:45.814410   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:45.814426   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:45.879690   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:45.879708   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:45.890632   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:45.890646   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:45.945900   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:45.939503    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.940097    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.941591    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.942022    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.943469    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:45.939503    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.940097    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.941591    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.942022    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.943469    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:45.945911   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:45.945920   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
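
Note that the order of the "Gathering logs for ..." steps shuffles from iteration to iteration (kubelet, dmesg, describe nodes, CRI-O, and container status come in a different sequence each pass). That pattern is consistent with iterating a Go map, whose traversal order is deliberately randomized; whether minikube actually keeps its log sources in a map is an assumption here, but the effect is easy to reproduce:

```go
package main

import "fmt"

func main() {
	// Assumed layout: log sources keyed by name in a map. Go randomizes map
	// iteration order, so successive runs print these in different orders.
	sources := map[string]string{
		"kubelet":          "journalctl -u kubelet -n 400",
		"dmesg":            "dmesg --level warn,err,crit,alert,emerg",
		"describe nodes":   "kubectl describe nodes",
		"CRI-O":            "journalctl -u crio -n 400",
		"container status": "crictl ps -a",
	}
	for name, cmd := range sources {
		fmt.Printf("Gathering logs for %s via %q\n", name, cmd)
	}
}
```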
	I1003 18:15:48.510685   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:48.520989   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:48.521030   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:48.545850   38063 cri.go:89] found id: ""
	I1003 18:15:48.545863   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.545871   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:48.545875   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:48.545917   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:48.570678   38063 cri.go:89] found id: ""
	I1003 18:15:48.570691   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.570699   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:48.570704   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:48.570758   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:48.594906   38063 cri.go:89] found id: ""
	I1003 18:15:48.594922   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.594931   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:48.594936   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:48.595011   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:48.620934   38063 cri.go:89] found id: ""
	I1003 18:15:48.620951   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.620958   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:48.620963   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:48.621033   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:48.645916   38063 cri.go:89] found id: ""
	I1003 18:15:48.645933   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.645942   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:48.645947   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:48.646009   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:48.670919   38063 cri.go:89] found id: ""
	I1003 18:15:48.670932   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.670939   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:48.670944   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:48.671004   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:48.695257   38063 cri.go:89] found id: ""
	I1003 18:15:48.695274   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.695281   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:48.695289   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:48.695298   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:48.723183   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:48.723198   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:48.790906   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:48.790924   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:48.802517   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:48.802531   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:48.858274   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:48.851795    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.852286    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.853794    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.854187    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.855729    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:48.851795    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.852286    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.853794    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.854187    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.855729    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:48.858294   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:48.858309   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:51.418365   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:51.428790   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:51.428851   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:51.453214   38063 cri.go:89] found id: ""
	I1003 18:15:51.453228   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.453235   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:51.453241   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:51.453302   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:51.478216   38063 cri.go:89] found id: ""
	I1003 18:15:51.478231   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.478241   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:51.478247   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:51.478298   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:51.503301   38063 cri.go:89] found id: ""
	I1003 18:15:51.503316   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.503322   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:51.503327   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:51.503368   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:51.528130   38063 cri.go:89] found id: ""
	I1003 18:15:51.528146   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.528152   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:51.528157   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:51.528196   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:51.553046   38063 cri.go:89] found id: ""
	I1003 18:15:51.553076   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.553084   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:51.553091   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:51.553133   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:51.577406   38063 cri.go:89] found id: ""
	I1003 18:15:51.577420   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.577426   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:51.577432   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:51.577471   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:51.602068   38063 cri.go:89] found id: ""
	I1003 18:15:51.602084   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.602092   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:51.602102   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:51.602114   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:51.629035   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:51.629051   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:51.697997   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:51.698016   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:51.710748   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:51.710769   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:51.764330   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:51.757745    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.758298    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.759850    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.760310    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.761740    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:51.757745    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.758298    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.759850    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.760310    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.761740    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:51.764338   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:51.764348   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:54.323078   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:54.333510   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:54.333559   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:54.357777   38063 cri.go:89] found id: ""
	I1003 18:15:54.357790   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.357796   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:54.357800   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:54.357841   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:54.381421   38063 cri.go:89] found id: ""
	I1003 18:15:54.381435   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.381442   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:54.381447   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:54.381495   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:54.404951   38063 cri.go:89] found id: ""
	I1003 18:15:54.404969   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.404991   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:54.404999   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:54.405045   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:54.429154   38063 cri.go:89] found id: ""
	I1003 18:15:54.429172   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.429181   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:54.429186   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:54.429224   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:54.452874   38063 cri.go:89] found id: ""
	I1003 18:15:54.452895   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.452903   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:54.452907   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:54.452946   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:54.477916   38063 cri.go:89] found id: ""
	I1003 18:15:54.477929   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.477937   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:54.477942   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:54.478001   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:54.503676   38063 cri.go:89] found id: ""
	I1003 18:15:54.503692   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.503699   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:54.503706   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:54.503716   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:54.571451   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:54.571469   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:54.582598   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:54.582614   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:54.635288   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:54.629106    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.629524    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.631026    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.631408    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.632845    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:54.629106    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.629524    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.631026    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.631408    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.632845    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:54.635301   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:54.635338   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:54.693328   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:54.693348   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
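
The "container status" one-liner above is a two-stage fallback: use crictl at the path `which` resolves (or the bare name if it is not on PATH), and only if that invocation fails outright, fall back to `sudo docker ps -a`. A rough Go equivalent of the same chain (a sketch, not minikube's code):

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus follows the same fallback chain as the shell one-liner:
// resolved crictl path if available, bare "crictl" otherwise, and
// `docker ps -a` only if the crictl invocation fails.
func containerStatus() ([]byte, error) {
	crictl := "crictl"
	if path, err := exec.LookPath("crictl"); err == nil { // `which crictl`
		crictl = path
	}
	if out, err := exec.Command("sudo", crictl, "ps", "-a").CombinedOutput(); err == nil {
		return out, nil
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("neither crictl nor docker produced output:", err)
		return
	}
	fmt.Print(string(out))
}
```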
	I1003 18:15:57.224616   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:57.234873   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:57.234916   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:57.259150   38063 cri.go:89] found id: ""
	I1003 18:15:57.259164   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.259170   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:57.259175   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:57.259224   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:57.282636   38063 cri.go:89] found id: ""
	I1003 18:15:57.282650   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.282662   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:57.282667   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:57.282716   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:57.307774   38063 cri.go:89] found id: ""
	I1003 18:15:57.307792   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.307800   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:57.307806   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:57.307846   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:57.331087   38063 cri.go:89] found id: ""
	I1003 18:15:57.331101   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.331107   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:57.331112   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:57.331153   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:57.356108   38063 cri.go:89] found id: ""
	I1003 18:15:57.356125   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.356200   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:57.356209   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:57.356267   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:57.381138   38063 cri.go:89] found id: ""
	I1003 18:15:57.381154   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.381161   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:57.381166   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:57.381206   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:57.405322   38063 cri.go:89] found id: ""
	I1003 18:15:57.405339   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.405345   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:57.405353   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:57.405362   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:57.463330   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:57.463345   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:57.491754   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:57.491771   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:57.557710   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:57.557727   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:57.569135   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:57.569150   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:57.622275   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:57.615880    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.616369    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.617874    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.618325    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.619768    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:57.615880    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.616369    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.617874    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.618325    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.619768    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:00.123157   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:00.133350   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:00.133393   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:00.157946   38063 cri.go:89] found id: ""
	I1003 18:16:00.157958   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.157965   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:00.157970   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:00.158035   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:00.182943   38063 cri.go:89] found id: ""
	I1003 18:16:00.182956   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.182962   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:00.182967   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:00.183026   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:00.206834   38063 cri.go:89] found id: ""
	I1003 18:16:00.206848   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.206854   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:00.206858   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:00.206901   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:00.231944   38063 cri.go:89] found id: ""
	I1003 18:16:00.231959   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.231965   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:00.231970   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:00.232027   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:00.257587   38063 cri.go:89] found id: ""
	I1003 18:16:00.257607   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.257613   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:00.257619   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:00.257662   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:00.281667   38063 cri.go:89] found id: ""
	I1003 18:16:00.281683   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.281690   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:00.281694   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:00.281735   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:00.306161   38063 cri.go:89] found id: ""
	I1003 18:16:00.306173   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.306183   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:00.306189   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:00.306199   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:00.334078   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:00.334094   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:00.398782   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:00.398800   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:00.410100   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:00.410118   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:00.464563   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:00.458004    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.458485    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.459956    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.460373    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.461844    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:00.458004    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.458485    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.459956    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.460373    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.461844    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:00.464573   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:00.464584   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:03.025201   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:03.035449   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:03.035489   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:03.060615   38063 cri.go:89] found id: ""
	I1003 18:16:03.060629   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.060638   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:03.060644   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:03.060695   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:03.085028   38063 cri.go:89] found id: ""
	I1003 18:16:03.085041   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.085047   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:03.085052   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:03.085101   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:03.109281   38063 cri.go:89] found id: ""
	I1003 18:16:03.109295   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.109301   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:03.109306   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:03.109343   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:03.133199   38063 cri.go:89] found id: ""
	I1003 18:16:03.133212   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.133218   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:03.133223   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:03.133271   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:03.157142   38063 cri.go:89] found id: ""
	I1003 18:16:03.157158   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.157167   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:03.157174   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:03.157215   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:03.181156   38063 cri.go:89] found id: ""
	I1003 18:16:03.181170   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.181177   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:03.181182   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:03.181225   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:03.207371   38063 cri.go:89] found id: ""
	I1003 18:16:03.207385   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.207392   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:03.207399   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:03.207407   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:03.268072   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:03.268093   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:03.295655   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:03.295675   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:03.359095   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:03.359116   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:03.370093   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:03.370110   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:03.423681   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:03.416458    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:03.416947    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:03.419089    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:03.419495    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:03.421012    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:05.925327   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:05.935882   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:05.935927   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:05.960833   38063 cri.go:89] found id: ""
	I1003 18:16:05.960850   38063 logs.go:282] 0 containers: []
	W1003 18:16:05.960858   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:05.960864   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:05.960918   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:05.985562   38063 cri.go:89] found id: ""
	I1003 18:16:05.985577   38063 logs.go:282] 0 containers: []
	W1003 18:16:05.985585   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:05.985592   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:05.985644   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:06.008796   38063 cri.go:89] found id: ""
	I1003 18:16:06.008813   38063 logs.go:282] 0 containers: []
	W1003 18:16:06.008822   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:06.008827   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:06.008865   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:06.034023   38063 cri.go:89] found id: ""
	I1003 18:16:06.034037   38063 logs.go:282] 0 containers: []
	W1003 18:16:06.034043   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:06.034048   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:06.034099   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:06.057314   38063 cri.go:89] found id: ""
	I1003 18:16:06.057330   38063 logs.go:282] 0 containers: []
	W1003 18:16:06.057340   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:06.057347   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:06.057396   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:06.082843   38063 cri.go:89] found id: ""
	I1003 18:16:06.082859   38063 logs.go:282] 0 containers: []
	W1003 18:16:06.082865   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:06.082870   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:06.082921   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:06.106237   38063 cri.go:89] found id: ""
	I1003 18:16:06.106251   38063 logs.go:282] 0 containers: []
	W1003 18:16:06.106257   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:06.106264   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:06.106276   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:06.175390   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:06.175407   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:06.186550   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:06.186565   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:06.239490   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:06.233165    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:06.233624    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:06.235128    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:06.235537    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:06.237048    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:06.239500   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:06.239513   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:06.301454   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:06.301474   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
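	[editor's note] Each probe cycle above issues the same seven crictl queries, one per expected control-plane component. A sketch that reproduces the whole sweep in one loop (component names and crictl flags are copied verbatim from the log; crictl on PATH is assumed):
	    # List any container (running or exited) for each expected component
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      printf '%-24s %s\n' "$c" "$(sudo crictl ps -a --quiet --name="$c" | head -n1)"
	    done
	An empty second column for every row is exactly the "0 containers" state logged above.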
	I1003 18:16:08.830757   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:08.841156   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:08.841199   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:08.865562   38063 cri.go:89] found id: ""
	I1003 18:16:08.865578   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.865584   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:08.865589   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:08.865636   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:08.889510   38063 cri.go:89] found id: ""
	I1003 18:16:08.889527   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.889536   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:08.889543   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:08.889588   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:08.914125   38063 cri.go:89] found id: ""
	I1003 18:16:08.914140   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.914146   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:08.914150   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:08.914195   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:08.937681   38063 cri.go:89] found id: ""
	I1003 18:16:08.937697   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.937706   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:08.937711   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:08.937752   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:08.961970   38063 cri.go:89] found id: ""
	I1003 18:16:08.961998   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.962006   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:08.962012   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:08.962073   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:08.986853   38063 cri.go:89] found id: ""
	I1003 18:16:08.986870   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.986877   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:08.986883   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:08.986953   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:09.012531   38063 cri.go:89] found id: ""
	I1003 18:16:09.012547   38063 logs.go:282] 0 containers: []
	W1003 18:16:09.012555   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:09.012570   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:09.012581   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:09.078036   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:09.078053   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:09.088904   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:09.088918   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:09.143252   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:09.136367    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:09.136907    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:09.138514    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:09.139001    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:09.140648    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:09.143263   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:09.143275   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:09.201869   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:09.201887   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:11.730105   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:11.740344   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:11.740384   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:11.765234   38063 cri.go:89] found id: ""
	I1003 18:16:11.765247   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.765256   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:11.765261   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:11.765318   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:11.789130   38063 cri.go:89] found id: ""
	I1003 18:16:11.789143   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.789149   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:11.789154   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:11.789198   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:11.815036   38063 cri.go:89] found id: ""
	I1003 18:16:11.815050   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.815058   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:11.815064   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:11.815113   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:11.839467   38063 cri.go:89] found id: ""
	I1003 18:16:11.839483   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.839490   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:11.839495   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:11.839539   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:11.863864   38063 cri.go:89] found id: ""
	I1003 18:16:11.863893   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.863899   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:11.863904   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:11.863955   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:11.889464   38063 cri.go:89] found id: ""
	I1003 18:16:11.889480   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.889488   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:11.889495   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:11.889535   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:11.912845   38063 cri.go:89] found id: ""
	I1003 18:16:11.912862   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.912870   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:11.912880   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:11.912904   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:11.966773   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:11.959444    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:11.960161    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:11.961014    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:11.962530    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:11.962898    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:11.966785   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:11.966795   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:12.025128   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:12.025146   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:12.053945   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:12.053960   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:12.119420   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:12.119438   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
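	[editor's note] The "Gathering logs for ..." steps map to four shell commands, all visible verbatim in the entries above; bundled here as one sketch for manual reuse (flags copied from the log, paths assume a stock minikube node):
	    sudo journalctl -u kubelet -n 400                                          # kubelet
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings
	    sudo journalctl -u crio -n 400                                             # CRI-O
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a           # container status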
	I1003 18:16:14.631092   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:14.641283   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:14.641330   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:14.665808   38063 cri.go:89] found id: ""
	I1003 18:16:14.665821   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.665827   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:14.665832   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:14.665874   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:14.690191   38063 cri.go:89] found id: ""
	I1003 18:16:14.690204   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.690211   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:14.690216   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:14.690266   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:14.715586   38063 cri.go:89] found id: ""
	I1003 18:16:14.715598   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.715619   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:14.715623   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:14.715677   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:14.740173   38063 cri.go:89] found id: ""
	I1003 18:16:14.740190   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.740198   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:14.740202   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:14.740247   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:14.764574   38063 cri.go:89] found id: ""
	I1003 18:16:14.764589   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.764595   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:14.764599   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:14.764653   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:14.788993   38063 cri.go:89] found id: ""
	I1003 18:16:14.789007   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.789014   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:14.789018   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:14.789059   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:14.813679   38063 cri.go:89] found id: ""
	I1003 18:16:14.813692   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.813699   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:14.813706   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:14.813715   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:14.840363   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:14.840378   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:14.906264   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:14.906280   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:14.917237   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:14.917251   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:14.971230   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:14.964471    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:14.965000    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:14.966522    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:14.966918    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:14.968491    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:14.971246   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:14.971257   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:17.534133   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:17.544453   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:17.544502   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:17.568816   38063 cri.go:89] found id: ""
	I1003 18:16:17.568834   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.568841   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:17.568847   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:17.568899   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:17.593442   38063 cri.go:89] found id: ""
	I1003 18:16:17.593460   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.593466   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:17.593472   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:17.593515   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:17.617737   38063 cri.go:89] found id: ""
	I1003 18:16:17.617754   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.617761   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:17.617766   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:17.617804   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:17.642180   38063 cri.go:89] found id: ""
	I1003 18:16:17.642194   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.642201   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:17.642206   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:17.642250   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:17.666189   38063 cri.go:89] found id: ""
	I1003 18:16:17.666204   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.666210   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:17.666214   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:17.666259   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:17.689273   38063 cri.go:89] found id: ""
	I1003 18:16:17.689289   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.689297   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:17.689305   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:17.689345   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:17.714353   38063 cri.go:89] found id: ""
	I1003 18:16:17.714373   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.714381   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:17.714394   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:17.714407   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:17.768746   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:17.762135    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:17.762597    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:17.764136    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:17.764533    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:17.766023    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:17.768759   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:17.768768   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:17.830139   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:17.830159   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:17.858326   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:17.858342   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:17.922889   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:17.922911   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
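	[editor's note] The roughly 3-second cadence of the pgrep entries shows minikube polling for the apiserver process until some deadline. A minimal bash equivalent of that wait loop (the process pattern is the one from the log; the 2-minute budget is an illustrative assumption, not minikube's actual timeout):
	    deadline=$(( $(date +%s) + 120 ))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      [ "$(date +%s)" -ge "$deadline" ] && { echo "apiserver never started" >&2; exit 1; }
	      sleep 3
	    done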
	I1003 18:16:20.435863   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:20.446321   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:20.446361   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:20.471731   38063 cri.go:89] found id: ""
	I1003 18:16:20.471743   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.471749   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:20.471753   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:20.471792   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:20.495730   38063 cri.go:89] found id: ""
	I1003 18:16:20.495747   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.495755   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:20.495760   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:20.495815   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:20.520555   38063 cri.go:89] found id: ""
	I1003 18:16:20.520572   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.520581   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:20.520597   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:20.520650   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:20.545197   38063 cri.go:89] found id: ""
	I1003 18:16:20.545210   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.545216   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:20.545220   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:20.545258   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:20.569113   38063 cri.go:89] found id: ""
	I1003 18:16:20.569126   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.569132   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:20.569138   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:20.569189   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:20.593468   38063 cri.go:89] found id: ""
	I1003 18:16:20.593483   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.593491   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:20.593496   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:20.593545   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:20.617852   38063 cri.go:89] found id: ""
	I1003 18:16:20.617865   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.617872   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:20.617878   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:20.617887   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:20.680360   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:20.680379   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:20.691258   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:20.691271   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:20.745174   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:20.738655    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.739179    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.740672    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.741122    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.742610    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:20.745187   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:20.745197   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:20.806835   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:20.806853   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:23.335788   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:23.346440   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:23.346505   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:23.371250   38063 cri.go:89] found id: ""
	I1003 18:16:23.371263   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.371269   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:23.371273   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:23.371315   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:23.396570   38063 cri.go:89] found id: ""
	I1003 18:16:23.396585   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.396592   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:23.396596   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:23.396646   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:23.420703   38063 cri.go:89] found id: ""
	I1003 18:16:23.420718   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.420728   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:23.420735   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:23.420783   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:23.445294   38063 cri.go:89] found id: ""
	I1003 18:16:23.445310   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.445319   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:23.445326   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:23.445372   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:23.470082   38063 cri.go:89] found id: ""
	I1003 18:16:23.470100   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.470106   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:23.470110   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:23.470148   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:23.494417   38063 cri.go:89] found id: ""
	I1003 18:16:23.494432   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.494441   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:23.494446   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:23.494489   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:23.519492   38063 cri.go:89] found id: ""
	I1003 18:16:23.519507   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.519516   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:23.519526   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:23.519538   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:23.583328   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:23.583346   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:23.594696   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:23.594710   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:23.649094   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:23.642344    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.642882    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.644368    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.644805    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.646275    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
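	[editor's note] Note the dial target in the stderr: [::1]:8441, i.e. kubectl resolved localhost to the IPv6 loopback. When debugging, it is worth probing both loopback families, since a listener bound only to 127.0.0.1 would still refuse the IPv6 dial (endpoint and port taken from the log; this is a diagnostic sketch, not part of the test):
	    curl -sk https://127.0.0.1:8441/healthz; echo   # IPv4 loopback
	    curl -sk 'https://[::1]:8441/healthz'; echo     # IPv6 loopback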
	I1003 18:16:23.649104   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:23.649113   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:23.710665   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:23.710684   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:26.239439   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:26.250313   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:26.250355   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:26.275460   38063 cri.go:89] found id: ""
	I1003 18:16:26.275476   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.275484   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:26.275490   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:26.275544   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:26.300685   38063 cri.go:89] found id: ""
	I1003 18:16:26.300701   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.300710   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:26.300716   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:26.300760   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:26.324124   38063 cri.go:89] found id: ""
	I1003 18:16:26.324141   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.324150   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:26.324156   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:26.324203   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:26.349331   38063 cri.go:89] found id: ""
	I1003 18:16:26.349348   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.349357   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:26.349363   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:26.349407   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:26.373924   38063 cri.go:89] found id: ""
	I1003 18:16:26.373938   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.373944   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:26.373948   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:26.374020   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:26.398561   38063 cri.go:89] found id: ""
	I1003 18:16:26.398575   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.398581   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:26.398593   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:26.398637   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:26.423043   38063 cri.go:89] found id: ""
	I1003 18:16:26.423055   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.423064   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:26.423073   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:26.423085   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:26.448940   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:26.448957   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:26.514345   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:26.514362   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:26.525206   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:26.525218   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:26.579573   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:26.572848    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.573316    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.574821    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.575280    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.576738    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:26.579590   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:26.579599   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
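	[editor's note] Every sweep finds zero containers for every component, which points at the layer below Kubernetes: either the kubelet never launched the static pods or CRI-O cannot create them. A first check of the two services (unit names as used in the journalctl calls above; /etc/kubernetes/manifests is the standard kubeadm static-pod path and is assumed here):
	    sudo systemctl --no-pager status kubelet crio
	    # Static pod manifests the kubelet should be running
	    ls -l /etc/kubernetes/manifests/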
	I1003 18:16:29.139399   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:29.149491   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:29.149546   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:29.174745   38063 cri.go:89] found id: ""
	I1003 18:16:29.174759   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.174764   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:29.174769   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:29.174809   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:29.199728   38063 cri.go:89] found id: ""
	I1003 18:16:29.199741   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.199747   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:29.199752   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:29.199803   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:29.225114   38063 cri.go:89] found id: ""
	I1003 18:16:29.225130   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.225139   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:29.225145   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:29.225208   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:29.249942   38063 cri.go:89] found id: ""
	I1003 18:16:29.249959   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.249968   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:29.249990   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:29.250054   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:29.274658   38063 cri.go:89] found id: ""
	I1003 18:16:29.274676   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.274684   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:29.274690   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:29.274740   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:29.299132   38063 cri.go:89] found id: ""
	I1003 18:16:29.299147   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.299153   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:29.299159   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:29.299207   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:29.323399   38063 cri.go:89] found id: ""
	I1003 18:16:29.323414   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.323420   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:29.323427   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:29.323436   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:29.388896   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:29.388919   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:29.400252   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:29.400267   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:29.453553   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:29.447303    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.447746    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.449289    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.449640    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.451133    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:29.447303    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.447746    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.449289    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.449640    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.451133    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:29.453604   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:29.453615   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:29.515234   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:29.515257   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
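	The repeating block above is minikube's control-plane probe: it checks for a running apiserver process (the pgrep -xnf line), then asks the CRI runtime for containers matching each expected component, and finds none. The same sweep can be reproduced by hand with the exact crictl invocation from the Run: lines (component list copied from the log):

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "no container matching \"$name\""
	    done

	An empty result for every component, as here, means the runtime is reachable but no control-plane container was ever created.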
	I1003 18:16:32.045106   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:32.055516   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:32.055563   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:32.081412   38063 cri.go:89] found id: ""
	I1003 18:16:32.081425   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.081431   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:32.081436   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:32.081476   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:32.106569   38063 cri.go:89] found id: ""
	I1003 18:16:32.106585   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.106591   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:32.106595   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:32.106634   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:32.131668   38063 cri.go:89] found id: ""
	I1003 18:16:32.131684   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.131692   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:32.131699   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:32.131745   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:32.156465   38063 cri.go:89] found id: ""
	I1003 18:16:32.156479   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.156485   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:32.156490   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:32.156566   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:32.181247   38063 cri.go:89] found id: ""
	I1003 18:16:32.181260   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.181267   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:32.181271   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:32.181314   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:32.205219   38063 cri.go:89] found id: ""
	I1003 18:16:32.205236   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.205245   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:32.205252   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:32.205305   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:32.229751   38063 cri.go:89] found id: ""
	I1003 18:16:32.229767   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.229776   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:32.229785   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:32.229797   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:32.257251   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:32.257266   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:32.325308   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:32.325326   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:32.336569   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:32.336584   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:32.391680   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:32.384542    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.385163    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.386741    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.387204    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.388820    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:32.384542    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.385163    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.386741    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.387204    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.388820    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:32.391693   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:32.391706   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:34.954303   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:34.965018   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:34.965070   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:34.990955   38063 cri.go:89] found id: ""
	I1003 18:16:34.990970   38063 logs.go:282] 0 containers: []
	W1003 18:16:34.990992   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:34.990999   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:34.991061   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:35.015676   38063 cri.go:89] found id: ""
	I1003 18:16:35.015689   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.015695   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:35.015699   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:35.015737   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:35.040155   38063 cri.go:89] found id: ""
	I1003 18:16:35.040168   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.040174   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:35.040179   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:35.040218   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:35.065569   38063 cri.go:89] found id: ""
	I1003 18:16:35.065587   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.065596   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:35.065602   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:35.065663   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:35.090276   38063 cri.go:89] found id: ""
	I1003 18:16:35.090288   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.090295   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:35.090299   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:35.090339   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:35.114581   38063 cri.go:89] found id: ""
	I1003 18:16:35.114617   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.114627   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:35.114633   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:35.114688   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:35.139719   38063 cri.go:89] found id: ""
	I1003 18:16:35.139734   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.139744   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:35.139753   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:35.139766   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:35.205015   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:35.205034   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:35.216021   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:35.216039   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:35.269655   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:35.262830    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.263341    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.264897    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.265346    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.266885    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:35.262830    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.263341    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.264897    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.265346    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.266885    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:35.269664   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:35.269674   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:35.330604   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:35.330634   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
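	With no control-plane containers to inspect, the probe falls back to host-level sources. Copied verbatim from the Run: lines, the equivalent manual commands are:

	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

	Only the "describe nodes" step requires the apiserver, which is why it is the one step that fails on every pass.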
	I1003 18:16:37.861503   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:37.871534   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:37.871641   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:37.895946   38063 cri.go:89] found id: ""
	I1003 18:16:37.895961   38063 logs.go:282] 0 containers: []
	W1003 18:16:37.895971   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:37.895995   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:37.896048   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:37.921286   38063 cri.go:89] found id: ""
	I1003 18:16:37.921301   38063 logs.go:282] 0 containers: []
	W1003 18:16:37.921308   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:37.921314   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:37.921364   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:37.946115   38063 cri.go:89] found id: ""
	I1003 18:16:37.946131   38063 logs.go:282] 0 containers: []
	W1003 18:16:37.946141   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:37.946148   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:37.946194   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:37.970857   38063 cri.go:89] found id: ""
	I1003 18:16:37.970871   38063 logs.go:282] 0 containers: []
	W1003 18:16:37.970878   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:37.970882   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:37.970930   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:37.997387   38063 cri.go:89] found id: ""
	I1003 18:16:37.997405   38063 logs.go:282] 0 containers: []
	W1003 18:16:37.997412   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:37.997416   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:37.997459   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:38.022848   38063 cri.go:89] found id: ""
	I1003 18:16:38.022862   38063 logs.go:282] 0 containers: []
	W1003 18:16:38.022869   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:38.022874   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:38.022938   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:38.048588   38063 cri.go:89] found id: ""
	I1003 18:16:38.048624   38063 logs.go:282] 0 containers: []
	W1003 18:16:38.048632   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:38.048640   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:38.048653   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:38.110031   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:38.110050   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:38.137498   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:38.137513   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:38.203958   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:38.203994   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:38.215727   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:38.215744   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:38.269765   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:38.263066    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.263531    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.265220    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.265597    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.267129    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:38.263066    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.263531    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.265220    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.265597    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.267129    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
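	The timestamps show the whole cycle retrying on a roughly three-second cadence (18:16:26, :29, :32, :35, ...). A sketch of an equivalent wait loop, with the interval inferred from those timestamps rather than taken from minikube's source:

	    # Poll until an apiserver process for this profile appears (sketch; 3s interval inferred).
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 3
	    done

	In this run the condition never becomes true, so the cycle repeats until the surrounding test gives up.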
	I1003 18:16:40.770413   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:40.780831   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:40.780874   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:40.804826   38063 cri.go:89] found id: ""
	I1003 18:16:40.804839   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.804845   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:40.804850   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:40.804890   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:40.830833   38063 cri.go:89] found id: ""
	I1003 18:16:40.830850   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.830858   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:40.830864   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:40.830930   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:40.856650   38063 cri.go:89] found id: ""
	I1003 18:16:40.856669   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.856677   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:40.856693   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:40.856748   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:40.881236   38063 cri.go:89] found id: ""
	I1003 18:16:40.881250   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.881256   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:40.881261   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:40.881301   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:40.905820   38063 cri.go:89] found id: ""
	I1003 18:16:40.905836   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.905843   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:40.905849   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:40.905900   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:40.931504   38063 cri.go:89] found id: ""
	I1003 18:16:40.931520   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.931527   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:40.931532   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:40.931583   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:40.957539   38063 cri.go:89] found id: ""
	I1003 18:16:40.957553   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.957560   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:40.957567   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:40.957578   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:41.015948   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:41.015969   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:41.044701   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:41.044726   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:41.112388   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:41.112406   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:41.123384   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:41.123399   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:41.177789   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:41.171080    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.171701    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.173280    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.173749    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.175246    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:41.171080    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.171701    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.173280    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.173749    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.175246    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:43.679496   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:43.689800   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:43.689843   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:43.714130   38063 cri.go:89] found id: ""
	I1003 18:16:43.714145   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.714152   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:43.714156   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:43.714197   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:43.738900   38063 cri.go:89] found id: ""
	I1003 18:16:43.738916   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.738924   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:43.738929   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:43.738972   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:43.763822   38063 cri.go:89] found id: ""
	I1003 18:16:43.763835   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.763841   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:43.763845   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:43.763884   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:43.789103   38063 cri.go:89] found id: ""
	I1003 18:16:43.789120   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.789128   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:43.789134   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:43.789187   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:43.813436   38063 cri.go:89] found id: ""
	I1003 18:16:43.813447   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.813455   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:43.813460   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:43.813513   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:43.838306   38063 cri.go:89] found id: ""
	I1003 18:16:43.838322   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.838331   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:43.838338   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:43.838382   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:43.863413   38063 cri.go:89] found id: ""
	I1003 18:16:43.863429   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.863435   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:43.863442   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:43.863451   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:43.931299   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:43.931317   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:43.942307   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:43.942321   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:43.997476   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:43.990626    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.991191    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.992711    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.993154    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.994633    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:43.990626    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.991191    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.992711    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.993154    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.994633    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:43.997488   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:43.997500   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:44.053446   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:44.053464   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:46.583423   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:46.593663   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:46.593719   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:46.618188   38063 cri.go:89] found id: ""
	I1003 18:16:46.618202   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.618208   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:46.618213   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:46.618250   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:46.642929   38063 cri.go:89] found id: ""
	I1003 18:16:46.642943   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.642949   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:46.642954   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:46.643015   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:46.667745   38063 cri.go:89] found id: ""
	I1003 18:16:46.667761   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.667770   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:46.667775   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:46.667818   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:46.692080   38063 cri.go:89] found id: ""
	I1003 18:16:46.692092   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.692098   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:46.692102   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:46.692140   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:46.716789   38063 cri.go:89] found id: ""
	I1003 18:16:46.716807   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.716816   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:46.716822   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:46.716867   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:46.741361   38063 cri.go:89] found id: ""
	I1003 18:16:46.741375   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.741382   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:46.741389   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:46.741437   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:46.765330   38063 cri.go:89] found id: ""
	I1003 18:16:46.765343   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.765349   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:46.765357   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:46.765368   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:46.830366   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:46.830385   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:46.841266   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:46.841279   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:46.894396   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:46.888072    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.888542    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.890079    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.890459    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.891950    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:46.888072    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.888542    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.890079    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.890459    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.891950    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:46.894415   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:46.894426   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:46.954277   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:46.954295   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:49.482413   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:49.492881   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:49.492921   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:49.516075   38063 cri.go:89] found id: ""
	I1003 18:16:49.516093   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.516102   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:49.516108   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:49.516154   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:49.542911   38063 cri.go:89] found id: ""
	I1003 18:16:49.542928   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.542936   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:49.542940   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:49.543006   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:49.568965   38063 cri.go:89] found id: ""
	I1003 18:16:49.568996   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.569005   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:49.569009   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:49.569055   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:49.593221   38063 cri.go:89] found id: ""
	I1003 18:16:49.593238   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.593246   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:49.593251   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:49.593302   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:49.618807   38063 cri.go:89] found id: ""
	I1003 18:16:49.618824   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.618831   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:49.618848   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:49.618893   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:49.642342   38063 cri.go:89] found id: ""
	I1003 18:16:49.642357   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.642363   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:49.642368   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:49.642407   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:49.666474   38063 cri.go:89] found id: ""
	I1003 18:16:49.666488   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.666494   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:49.666502   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:49.666513   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:49.722457   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:49.722476   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:49.750153   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:49.750170   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:49.814369   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:49.814387   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:49.825405   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:49.825418   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:49.879924   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:49.873380    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.873871    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.875556    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.876003    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.877459    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:49.873380    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.873871    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.875556    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.876003    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.877459    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
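	Note that the describe-nodes probe uses the kubectl binary and kubeconfig staged inside the node rather than the host's; to rerun exactly what keeps failing, per the Run: line:

	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

	That kubeconfig points the client at https://localhost:8441, which is why its stderr matches the connection-refused errors above.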
	I1003 18:16:52.380662   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:52.391022   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:52.391066   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:52.414399   38063 cri.go:89] found id: ""
	I1003 18:16:52.414416   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.414423   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:52.414428   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:52.414466   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:52.438285   38063 cri.go:89] found id: ""
	I1003 18:16:52.438301   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.438308   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:52.438312   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:52.438352   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:52.463204   38063 cri.go:89] found id: ""
	I1003 18:16:52.463218   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.463224   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:52.463229   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:52.463271   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:52.487579   38063 cri.go:89] found id: ""
	I1003 18:16:52.487593   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.487598   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:52.487605   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:52.487658   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:52.512643   38063 cri.go:89] found id: ""
	I1003 18:16:52.512657   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.512663   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:52.512667   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:52.512705   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:52.538897   38063 cri.go:89] found id: ""
	I1003 18:16:52.538913   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.538920   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:52.538926   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:52.538970   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:52.563277   38063 cri.go:89] found id: ""
	I1003 18:16:52.563294   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.563302   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:52.563310   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:52.563321   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:52.622624   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:52.622642   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:52.650058   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:52.650074   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:52.714242   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:52.714261   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:52.725305   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:52.725319   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:52.777801   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:52.771320   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.772111   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.773166   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.773579   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.775090   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:52.771320   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.772111   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.773166   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.773579   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.775090   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
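The block above is the failure signature that repeats for the rest of this log: kubectl cannot reach the API server on localhost:8441 because no kube-apiserver container exists yet, so every describe-nodes attempt exits with connection refused and minikube returns to polling. A minimal manual reproduction of the same check from inside the node, assuming shell access; the curl probe is an illustrative addition, not part of minikube's own loop:

	# Is an apiserver process or container present at all?
	sudo pgrep -xnf kube-apiserver.*minikube.*
	sudo crictl ps -a --quiet --name=kube-apiserver

	# Probe the advertised port directly; "connection refused" here
	# matches the kubectl errors quoted above.
	curl -ksS https://localhost:8441/healthz || true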
	I1003 18:16:55.279440   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:55.290117   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:55.290161   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:55.315904   38063 cri.go:89] found id: ""
	I1003 18:16:55.315920   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.315926   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:55.315930   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:55.315996   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:55.340568   38063 cri.go:89] found id: ""
	I1003 18:16:55.340582   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.340588   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:55.340593   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:55.340631   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:55.365911   38063 cri.go:89] found id: ""
	I1003 18:16:55.365927   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.365937   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:55.365943   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:55.366003   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:55.390838   38063 cri.go:89] found id: ""
	I1003 18:16:55.390855   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.390864   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:55.390870   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:55.390924   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:55.414625   38063 cri.go:89] found id: ""
	I1003 18:16:55.414638   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.414651   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:55.414657   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:55.414712   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:55.438460   38063 cri.go:89] found id: ""
	I1003 18:16:55.438474   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.438480   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:55.438484   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:55.438522   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:55.463131   38063 cri.go:89] found id: ""
	I1003 18:16:55.463148   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.463156   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:55.463165   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:55.463176   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:55.516949   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:55.510276   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.510824   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.512379   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.512767   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.514262   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:55.510276   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.510824   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.512379   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.512767   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.514262   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:55.516958   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:55.516968   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:55.573992   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:55.574010   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:55.601928   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:55.601944   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:55.667452   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:55.667470   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
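The container-status probe a few lines up uses a shell fallback chain: it prefers a crictl found on PATH, retries the bare name if `which` finds nothing, and finally drops to docker. Spelled out, slightly simplified from the backquoted original:

	# Equivalent of: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	if command -v crictl >/dev/null 2>&1; then
	    sudo crictl ps -a
	else
	    sudo docker ps -a
	fi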
	I1003 18:16:58.180268   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:58.190896   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:58.190942   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:58.215802   38063 cri.go:89] found id: ""
	I1003 18:16:58.215820   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.215828   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:58.215835   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:58.215885   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:58.240607   38063 cri.go:89] found id: ""
	I1003 18:16:58.240623   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.240632   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:58.240638   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:58.240719   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:58.264676   38063 cri.go:89] found id: ""
	I1003 18:16:58.264689   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.264696   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:58.264703   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:58.264742   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:58.289482   38063 cri.go:89] found id: ""
	I1003 18:16:58.289496   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.289502   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:58.289507   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:58.289558   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:58.314683   38063 cri.go:89] found id: ""
	I1003 18:16:58.314699   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.314708   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:58.314714   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:58.314763   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:58.340874   38063 cri.go:89] found id: ""
	I1003 18:16:58.340900   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.340910   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:58.340918   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:58.340989   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:58.365744   38063 cri.go:89] found id: ""
	I1003 18:16:58.365765   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.365774   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:58.365785   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:58.365798   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:58.424919   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:58.424938   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:58.452107   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:58.452122   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:58.516078   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:58.516098   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:58.527186   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:58.527200   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:58.581397   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:58.574853   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.575363   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.576868   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.577319   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.578848   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:58.574853   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.575363   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.576868   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.577319   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.578848   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
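Each gathering pass pulls only the tail of the relevant systemd units, which keeps these logs bounded even across many retries. The same slice can be taken by hand; --no-pager is added here for non-interactive use and is not part of minikube's invocation:

	# Last 400 lines from the runtime and the kubelet, as gathered above
	sudo journalctl -u crio -n 400 --no-pager
	sudo journalctl -u kubelet -n 400 --no-pager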
	I1003 18:17:01.083146   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:01.093268   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:01.093310   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:01.117816   38063 cri.go:89] found id: ""
	I1003 18:17:01.117833   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.117840   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:01.117844   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:01.117882   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:01.141987   38063 cri.go:89] found id: ""
	I1003 18:17:01.142004   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.142012   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:01.142018   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:01.142057   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:01.165255   38063 cri.go:89] found id: ""
	I1003 18:17:01.165271   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.165277   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:01.165282   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:01.165323   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:01.189244   38063 cri.go:89] found id: ""
	I1003 18:17:01.189257   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.189264   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:01.189269   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:01.189310   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:01.213365   38063 cri.go:89] found id: ""
	I1003 18:17:01.213381   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.213388   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:01.213395   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:01.213442   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:01.240957   38063 cri.go:89] found id: ""
	I1003 18:17:01.240972   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.241000   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:01.241007   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:01.241051   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:01.267290   38063 cri.go:89] found id: ""
	I1003 18:17:01.267306   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.267312   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:01.267320   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:01.267331   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:01.295273   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:01.295290   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:01.364816   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:01.364836   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:01.376420   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:01.376437   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:01.432587   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:01.425391   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.425950   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.427491   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.428036   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.429594   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:17:01.425391   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.425950   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.427491   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.428036   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.429594   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:17:01.432599   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:01.432613   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:03.992551   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:04.002736   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:04.002789   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:04.027153   38063 cri.go:89] found id: ""
	I1003 18:17:04.027169   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.027177   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:04.027183   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:04.027240   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:04.052384   38063 cri.go:89] found id: ""
	I1003 18:17:04.052399   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.052406   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:04.052411   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:04.052458   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:04.077210   38063 cri.go:89] found id: ""
	I1003 18:17:04.077225   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.077233   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:04.077243   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:04.077298   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:04.102192   38063 cri.go:89] found id: ""
	I1003 18:17:04.102208   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.102217   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:04.102223   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:04.102266   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:04.126632   38063 cri.go:89] found id: ""
	I1003 18:17:04.126647   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.126653   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:04.126658   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:04.126700   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:04.152736   38063 cri.go:89] found id: ""
	I1003 18:17:04.152752   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.152761   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:04.152768   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:04.152814   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:04.177062   38063 cri.go:89] found id: ""
	I1003 18:17:04.177080   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.177089   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:04.177099   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:04.177112   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:04.188211   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:04.188225   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:04.242641   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:04.235414   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.235943   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.237902   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.238634   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.240168   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:17:04.235414   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.235943   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.237902   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.238634   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.240168   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:17:04.242649   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:04.242661   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:04.302342   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:04.302368   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:04.330691   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:04.330717   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
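The dmesg invocation in these passes is tuned for capture rather than interactive reading. Per util-linux dmesg, -H selects human-readable output, -P suppresses the pager that -H would otherwise start, -L=never disables color, and --level restricts output to warnings and above; the tail keeps only the newest 400 lines:

	# Kernel messages at warn level and above, plain text, newest 400 lines
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400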
	I1003 18:17:06.899448   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:06.909768   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:06.909813   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:06.934090   38063 cri.go:89] found id: ""
	I1003 18:17:06.934103   38063 logs.go:282] 0 containers: []
	W1003 18:17:06.934109   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:06.934114   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:06.934152   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:06.958320   38063 cri.go:89] found id: ""
	I1003 18:17:06.958334   38063 logs.go:282] 0 containers: []
	W1003 18:17:06.958340   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:06.958343   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:06.958381   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:06.984766   38063 cri.go:89] found id: ""
	I1003 18:17:06.984783   38063 logs.go:282] 0 containers: []
	W1003 18:17:06.984792   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:06.984797   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:06.984857   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:07.011801   38063 cri.go:89] found id: ""
	I1003 18:17:07.011818   38063 logs.go:282] 0 containers: []
	W1003 18:17:07.011827   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:07.011832   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:07.011871   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:07.036323   38063 cri.go:89] found id: ""
	I1003 18:17:07.036339   38063 logs.go:282] 0 containers: []
	W1003 18:17:07.036347   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:07.036352   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:07.036402   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:07.061101   38063 cri.go:89] found id: ""
	I1003 18:17:07.061117   38063 logs.go:282] 0 containers: []
	W1003 18:17:07.061126   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:07.061134   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:07.061184   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:07.085274   38063 cri.go:89] found id: ""
	I1003 18:17:07.085286   38063 logs.go:282] 0 containers: []
	W1003 18:17:07.085293   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:07.085300   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:07.085309   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:07.146317   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:07.146334   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:07.175088   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:07.175102   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:07.243716   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:07.243735   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:07.255174   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:07.255190   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:07.308657   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:07.302083   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:07.302582   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:07.304157   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:07.304555   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:07.306037   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:17:07.302083   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:07.302582   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:07.304157   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:07.304555   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:07.306037   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
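Note that the describe-nodes probe runs minikube's own pinned binary (/var/lib/minikube/binaries/v1.34.1/kubectl) against the in-node kubeconfig, not the host's kubectl, so its failure isolates the problem to the cluster itself. A sketch of the same probe from the host; <profile> is a placeholder for the test's profile name, and passing a command through minikube ssh this way is an assumption about the usual CLI form:

	minikube -p <profile> ssh -- sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
	    describe nodes --kubeconfig=/var/lib/minikube/kubeconfig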
	I1003 18:17:09.809372   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:09.819499   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:09.819542   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:09.844409   38063 cri.go:89] found id: ""
	I1003 18:17:09.844423   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.844435   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:09.844439   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:09.844478   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:09.868767   38063 cri.go:89] found id: ""
	I1003 18:17:09.868781   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.868787   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:09.868791   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:09.868832   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:09.891798   38063 cri.go:89] found id: ""
	I1003 18:17:09.891810   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.891817   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:09.891821   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:09.891858   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:09.917378   38063 cri.go:89] found id: ""
	I1003 18:17:09.917393   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.917399   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:09.917405   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:09.917450   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:09.942686   38063 cri.go:89] found id: ""
	I1003 18:17:09.942699   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.942705   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:09.942710   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:09.942750   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:09.966104   38063 cri.go:89] found id: ""
	I1003 18:17:09.966117   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.966123   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:09.966128   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:09.966166   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:09.993525   38063 cri.go:89] found id: ""
	I1003 18:17:09.993538   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.993544   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:09.993551   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:09.993560   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:10.062246   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:10.062265   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:10.074081   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:10.074098   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:10.128788   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:10.122249   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:10.122773   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:10.124287   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:10.124702   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:10.126163   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:17:10.122249   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:10.122773   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:10.124287   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:10.124702   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:10.126163   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:17:10.128809   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:10.128820   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:10.186632   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:10.186649   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:12.716320   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:12.726641   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:12.726693   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:12.750384   38063 cri.go:89] found id: ""
	I1003 18:17:12.750397   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.750403   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:12.750407   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:12.750446   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:12.775313   38063 cri.go:89] found id: ""
	I1003 18:17:12.775330   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.775338   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:12.775344   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:12.775384   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:12.800228   38063 cri.go:89] found id: ""
	I1003 18:17:12.800244   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.800251   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:12.800256   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:12.800298   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:12.825275   38063 cri.go:89] found id: ""
	I1003 18:17:12.825291   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.825300   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:12.825317   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:12.825372   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:12.849255   38063 cri.go:89] found id: ""
	I1003 18:17:12.849271   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.849279   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:12.849285   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:12.849336   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:12.873407   38063 cri.go:89] found id: ""
	I1003 18:17:12.873421   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.873427   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:12.873431   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:12.873482   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:12.896762   38063 cri.go:89] found id: ""
	I1003 18:17:12.896778   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.896786   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:12.896795   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:12.896807   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:12.960955   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:12.960983   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:12.972163   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:12.972178   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:13.025479   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:13.018959   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:13.019441   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:13.020904   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:13.021379   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:13.022868   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:17:13.018959   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:13.019441   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:13.020904   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:13.021379   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:13.022868   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:17:13.025493   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:13.025506   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:13.086473   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:13.086491   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
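Every cycle checks the same seven component names one crictl call at a time, and every call returns an empty ID list; since these listings use State:all, that means no control-plane container was ever created, rather than created and crashed. The whole sweep condenses to one loop:

	# One pass over the component names minikube checks above
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet; do
	    echo "== $c =="
	    sudo crictl ps -a --quiet --name="$c"
	done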
	I1003 18:17:15.616095   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:15.626385   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:15.626428   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:15.650771   38063 cri.go:89] found id: ""
	I1003 18:17:15.650785   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.650792   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:15.650796   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:15.650837   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:15.675587   38063 cri.go:89] found id: ""
	I1003 18:17:15.675629   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.675637   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:15.675643   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:15.675705   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:15.699653   38063 cri.go:89] found id: ""
	I1003 18:17:15.699667   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.699673   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:15.699677   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:15.699716   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:15.724414   38063 cri.go:89] found id: ""
	I1003 18:17:15.724427   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.724435   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:15.724441   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:15.724496   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:15.749056   38063 cri.go:89] found id: ""
	I1003 18:17:15.749069   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.749077   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:15.749082   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:15.749123   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:15.773830   38063 cri.go:89] found id: ""
	I1003 18:17:15.773846   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.773859   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:15.773864   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:15.773907   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:15.798104   38063 cri.go:89] found id: ""
	I1003 18:17:15.798120   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.798126   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:15.798133   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:15.798143   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:15.851960   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:15.845372   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:15.845936   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:15.847479   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:15.847794   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:15.849288   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:15.851990   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:15.852005   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:15.909042   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:15.909059   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:15.936198   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:15.936212   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:16.001546   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:16.001563   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
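
The block above is one iteration of a poll loop: roughly every three seconds minikube probes for a running apiserver process (`sudo pgrep -xnf kube-apiserver.*minikube.*`), finds none, and re-gathers diagnostics. A minimal Go sketch of that wait loop, assuming a local runner in place of minikube's SSH runner (`runOnNode` and `waitForAPIServer` are hypothetical helpers, not minikube code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runOnNode is a hypothetical stand-in: minikube executes these commands
// over SSH inside the node; here they run locally for illustration.
func runOnNode(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}

// waitForAPIServer mirrors the probe in the log: pgrep exits non-zero when
// no process matches, which is exactly the failing case recorded above.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := runOnNode("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*"); err == nil {
			return nil // a matching apiserver process exists
		}
		time.Sleep(3 * time.Second) // the timestamps above are ~3s apart
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
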
	I1003 18:17:18.514268   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:18.524824   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:18.524867   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:18.549240   38063 cri.go:89] found id: ""
	I1003 18:17:18.549252   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.549259   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:18.549263   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:18.549304   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:18.573832   38063 cri.go:89] found id: ""
	I1003 18:17:18.573846   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.573851   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:18.573855   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:18.573893   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:18.600015   38063 cri.go:89] found id: ""
	I1003 18:17:18.600030   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.600038   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:18.600042   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:18.600092   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:18.624175   38063 cri.go:89] found id: ""
	I1003 18:17:18.624187   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.624193   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:18.624197   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:18.624235   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:18.647489   38063 cri.go:89] found id: ""
	I1003 18:17:18.647506   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.647515   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:18.647521   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:18.647563   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:18.671643   38063 cri.go:89] found id: ""
	I1003 18:17:18.671657   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.671663   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:18.671668   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:18.671706   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:18.696078   38063 cri.go:89] found id: ""
	I1003 18:17:18.696092   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.696098   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:18.696105   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:18.696121   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:18.753226   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:18.753245   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:18.780990   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:18.781068   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:18.847947   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:18.847966   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:18.859021   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:18.859037   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:18.912345   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:18.905516   11225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:18.906367   11225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:18.907929   11225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:18.908373   11225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:18.909849   11225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:21.414030   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:21.425003   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:21.425051   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:21.450060   38063 cri.go:89] found id: ""
	I1003 18:17:21.450073   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.450080   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:21.450085   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:21.450124   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:21.474474   38063 cri.go:89] found id: ""
	I1003 18:17:21.474488   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.474494   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:21.474499   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:21.474539   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:21.498126   38063 cri.go:89] found id: ""
	I1003 18:17:21.498142   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.498149   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:21.498154   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:21.498203   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:21.523905   38063 cri.go:89] found id: ""
	I1003 18:17:21.523923   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.523932   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:21.523938   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:21.524008   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:21.548187   38063 cri.go:89] found id: ""
	I1003 18:17:21.548201   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.548207   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:21.548211   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:21.548252   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:21.572667   38063 cri.go:89] found id: ""
	I1003 18:17:21.572680   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.572686   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:21.572692   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:21.572736   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:21.597807   38063 cri.go:89] found id: ""
	I1003 18:17:21.597824   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.597832   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:21.597839   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:21.597848   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:21.652152   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:21.645230   11331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:21.645729   11331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:21.647282   11331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:21.647701   11331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:21.649188   11331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:21.652166   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:21.652179   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:21.713448   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:21.713465   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:21.742437   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:21.742451   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:21.805537   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:21.805554   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
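
Each `found id: ""` / `0 containers` pair above comes from `sudo crictl ps -a --quiet --name=<component>`, which prints one container ID per line and prints nothing when the runtime has no matching container. A sketch of that lookup, with assumed names rather than minikube's cri.go internals:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainersByName wraps the logged command; --quiet restricts the
// output to bare container IDs, so empty output means zero containers.
func listContainersByName(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := listContainersByName(c)
		if err != nil {
			fmt.Printf("%s: lookup failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
	}
}
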
	I1003 18:17:24.317361   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:24.327608   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:24.327671   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:24.354286   38063 cri.go:89] found id: ""
	I1003 18:17:24.354305   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.354315   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:24.354320   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:24.354379   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:24.378696   38063 cri.go:89] found id: ""
	I1003 18:17:24.378710   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.378718   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:24.378724   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:24.378782   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:24.402575   38063 cri.go:89] found id: ""
	I1003 18:17:24.402589   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.402595   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:24.402600   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:24.402648   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:24.427138   38063 cri.go:89] found id: ""
	I1003 18:17:24.427154   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.427162   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:24.427169   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:24.427211   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:24.451521   38063 cri.go:89] found id: ""
	I1003 18:17:24.451536   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.451543   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:24.451547   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:24.451590   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:24.475930   38063 cri.go:89] found id: ""
	I1003 18:17:24.475943   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.475949   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:24.475954   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:24.476012   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:24.500074   38063 cri.go:89] found id: ""
	I1003 18:17:24.500087   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.500093   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:24.500100   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:24.500109   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:24.566537   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:24.566553   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:24.577539   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:24.577553   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:24.632738   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:24.626123   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:24.626592   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:24.628151   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:24.628571   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:24.630095   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:24.632749   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:24.632758   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:24.690610   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:24.690628   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:27.219340   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:27.229548   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:27.229602   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:27.253625   38063 cri.go:89] found id: ""
	I1003 18:17:27.253647   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.253655   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:27.253661   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:27.253712   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:27.277732   38063 cri.go:89] found id: ""
	I1003 18:17:27.277747   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.277756   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:27.277762   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:27.277804   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:27.301627   38063 cri.go:89] found id: ""
	I1003 18:17:27.301641   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.301647   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:27.301652   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:27.301701   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:27.327361   38063 cri.go:89] found id: ""
	I1003 18:17:27.327377   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.327386   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:27.327392   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:27.327455   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:27.351272   38063 cri.go:89] found id: ""
	I1003 18:17:27.351287   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.351296   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:27.351301   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:27.351354   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:27.376015   38063 cri.go:89] found id: ""
	I1003 18:17:27.376028   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.376034   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:27.376039   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:27.376078   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:27.401069   38063 cri.go:89] found id: ""
	I1003 18:17:27.401083   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.401089   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:27.401096   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:27.401106   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:27.461887   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:27.461903   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:27.489794   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:27.489811   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:27.556416   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:27.556437   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:27.567650   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:27.567666   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:27.621254   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:27.614343   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:27.615016   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:27.616631   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:27.617100   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:27.618643   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:30.121948   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:30.132195   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:30.132251   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:30.157028   38063 cri.go:89] found id: ""
	I1003 18:17:30.157044   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.157052   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:30.157059   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:30.157114   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:30.181243   38063 cri.go:89] found id: ""
	I1003 18:17:30.181257   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.181267   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:30.181272   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:30.181327   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:30.204956   38063 cri.go:89] found id: ""
	I1003 18:17:30.204969   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.204990   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:30.204996   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:30.205049   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:30.229309   38063 cri.go:89] found id: ""
	I1003 18:17:30.229324   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.229332   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:30.229353   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:30.229404   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:30.253288   38063 cri.go:89] found id: ""
	I1003 18:17:30.253302   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.253308   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:30.253312   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:30.253353   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:30.276885   38063 cri.go:89] found id: ""
	I1003 18:17:30.276900   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.276907   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:30.276912   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:30.276954   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:30.302076   38063 cri.go:89] found id: ""
	I1003 18:17:30.302093   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.302102   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:30.302111   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:30.302122   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:30.355957   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:30.349507   11695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:30.350118   11695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:30.351635   11695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:30.351999   11695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:30.353476   11695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:30.355967   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:30.355997   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:30.416595   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:30.416617   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:30.444417   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:30.444433   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:30.511869   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:30.511888   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
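
The "container status" gather above uses a double fallback: `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` resolves crictl if it is on PATH and drops back to docker if the crictl invocation fails. Roughly the same logic in Go (an assumed equivalent, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl and falls back to docker, mirroring the
// shell one-liner in the log.
func containerStatus() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			return out, nil
		}
	}
	// crictl missing or failed: ask docker instead.
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("neither crictl nor docker answered:", err)
		return
	}
	fmt.Print(string(out))
}
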
	I1003 18:17:33.023698   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:33.034090   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:33.034130   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:33.058440   38063 cri.go:89] found id: ""
	I1003 18:17:33.058454   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.058463   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:33.058469   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:33.058516   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:33.083214   38063 cri.go:89] found id: ""
	I1003 18:17:33.083227   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.083233   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:33.083238   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:33.083278   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:33.107106   38063 cri.go:89] found id: ""
	I1003 18:17:33.107121   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.107128   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:33.107132   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:33.107177   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:33.132152   38063 cri.go:89] found id: ""
	I1003 18:17:33.132169   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.132178   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:33.132184   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:33.132237   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:33.156458   38063 cri.go:89] found id: ""
	I1003 18:17:33.156475   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.156486   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:33.156492   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:33.156541   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:33.181450   38063 cri.go:89] found id: ""
	I1003 18:17:33.181466   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.181474   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:33.181480   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:33.181520   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:33.204281   38063 cri.go:89] found id: ""
	I1003 18:17:33.204299   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.204307   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:33.204316   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:33.204328   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:33.268843   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:33.268862   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:33.280428   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:33.280444   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:33.333875   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:33.327300   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:33.327741   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:33.329337   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:33.329778   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:33.331336   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:33.333888   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:33.333899   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:33.395285   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:33.395303   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
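
Every `kubectl describe nodes` attempt in this stretch fails identically with `dial tcp [::1]:8441: connect: connection refused`: nothing is listening on the apiserver port (8441 here, presumably the non-default port this test profile was started with), which is consistent with the empty kube-apiserver container listings above. A quick probe for that condition:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" from DialTimeout is the same failure kubectl
	// reports above: the port is closed, not merely slow.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}
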
	I1003 18:17:35.924723   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:35.935417   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:35.935459   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:35.959423   38063 cri.go:89] found id: ""
	I1003 18:17:35.959437   38063 logs.go:282] 0 containers: []
	W1003 18:17:35.959444   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:35.959448   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:35.959497   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:35.984930   38063 cri.go:89] found id: ""
	I1003 18:17:35.984943   38063 logs.go:282] 0 containers: []
	W1003 18:17:35.984949   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:35.984953   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:35.985011   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:36.010660   38063 cri.go:89] found id: ""
	I1003 18:17:36.010676   38063 logs.go:282] 0 containers: []
	W1003 18:17:36.010685   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:36.010692   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:36.010750   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:36.036836   38063 cri.go:89] found id: ""
	I1003 18:17:36.036851   38063 logs.go:282] 0 containers: []
	W1003 18:17:36.036859   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:36.036865   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:36.036931   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:36.062748   38063 cri.go:89] found id: ""
	I1003 18:17:36.062764   38063 logs.go:282] 0 containers: []
	W1003 18:17:36.062774   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:36.062780   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:36.062832   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:36.088459   38063 cri.go:89] found id: ""
	I1003 18:17:36.088476   38063 logs.go:282] 0 containers: []
	W1003 18:17:36.088485   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:36.088492   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:36.088544   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:36.118150   38063 cri.go:89] found id: ""
	I1003 18:17:36.118166   38063 logs.go:282] 0 containers: []
	W1003 18:17:36.118174   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:36.118183   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:36.118195   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:36.188996   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:36.189016   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:36.201752   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:36.201774   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:36.259714   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:36.253085   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:36.253879   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:36.255461   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:36.255860   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:36.257025   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:36.259724   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:36.259734   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:36.319327   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:36.319348   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:38.849084   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:38.860041   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:38.860087   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:38.885371   38063 cri.go:89] found id: ""
	I1003 18:17:38.885387   38063 logs.go:282] 0 containers: []
	W1003 18:17:38.885396   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:38.885403   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:38.885448   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:38.910420   38063 cri.go:89] found id: ""
	I1003 18:17:38.910433   38063 logs.go:282] 0 containers: []
	W1003 18:17:38.910439   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:38.910443   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:38.910492   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:38.935082   38063 cri.go:89] found id: ""
	I1003 18:17:38.935098   38063 logs.go:282] 0 containers: []
	W1003 18:17:38.935113   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:38.935119   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:38.935163   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:38.959589   38063 cri.go:89] found id: ""
	I1003 18:17:38.959605   38063 logs.go:282] 0 containers: []
	W1003 18:17:38.959614   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:38.959620   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:38.959664   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:38.983218   38063 cri.go:89] found id: ""
	I1003 18:17:38.983231   38063 logs.go:282] 0 containers: []
	W1003 18:17:38.983237   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:38.983241   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:38.983283   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:39.007734   38063 cri.go:89] found id: ""
	I1003 18:17:39.007748   38063 logs.go:282] 0 containers: []
	W1003 18:17:39.007754   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:39.007759   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:39.007803   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:39.032274   38063 cri.go:89] found id: ""
	I1003 18:17:39.032288   38063 logs.go:282] 0 containers: []
	W1003 18:17:39.032294   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:39.032301   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:39.032310   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:39.085898   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:39.079359   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:39.079847   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:39.081436   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:39.081830   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:39.083352   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:39.085913   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:39.085926   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:39.147336   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:39.147355   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:39.174505   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:39.174520   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:39.236749   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:39.236770   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
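
For reference, the gather commands repeated in each cycle are bounded on purpose: `journalctl -u <unit> -n 400` tails the last 400 lines of a unit's journal, and `dmesg -PH -L=never --level warn,err,crit,alert,emerg` disables the pager (-P), uses human-readable timestamps (-H), strips color (-L=never), and keeps only warning-or-worse kernel messages. A small sketch of the journal tail (assumed helper, not minikube code):

package main

import (
	"fmt"
	"os/exec"
)

// unitLogs tails the systemd journal for one unit, capped at n lines,
// matching the bounded gathers in the log above.
func unitLogs(unit string, n int) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).CombinedOutput()
	return string(out), err
}

func main() {
	for _, unit := range []string{"crio", "kubelet"} {
		logs, err := unitLogs(unit, 400)
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", unit, err)
			continue
		}
		fmt.Printf("== %s: %d bytes ==\n", unit, len(logs))
	}
}
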
	I1003 18:17:41.751919   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:41.762279   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:41.762318   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:41.788348   38063 cri.go:89] found id: ""
	I1003 18:17:41.788364   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.788370   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:41.788375   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:41.788416   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:41.813364   38063 cri.go:89] found id: ""
	I1003 18:17:41.813377   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.813383   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:41.813387   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:41.813428   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:41.838263   38063 cri.go:89] found id: ""
	I1003 18:17:41.838278   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.838286   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:41.838296   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:41.838342   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:41.863852   38063 cri.go:89] found id: ""
	I1003 18:17:41.863866   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.863875   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:41.863880   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:41.863928   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:41.888046   38063 cri.go:89] found id: ""
	I1003 18:17:41.888059   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.888065   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:41.888069   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:41.888123   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:41.912391   38063 cri.go:89] found id: ""
	I1003 18:17:41.912407   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.912414   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:41.912419   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:41.912465   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:41.936635   38063 cri.go:89] found id: ""
	I1003 18:17:41.936652   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.936667   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:41.936673   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:41.936682   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:41.999904   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:41.999923   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:42.010760   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:42.010774   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:42.063379   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:42.056776   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:42.057312   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:42.058864   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:42.059272   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:42.060765   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:42.063391   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:42.063403   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:42.120707   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:42.120724   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
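The probe cycle above can be reproduced by hand on the node. This is a minimal sketch using the same commands the harness runs; the profile name is a placeholder, and the "minikube ssh -- <cmd>" pass-through form is assumed:

    # Re-run the harness's control-plane probe manually (sketch; <profile> is hypothetical).
    minikube ssh -p <profile> -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # List matching containers in any state; empty output means none were ever created.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      minikube ssh -p <profile> -- sudo crictl ps -a --quiet --name="$c"
    done

Empty output for every name, matching the found id: "" lines above, means no control-plane containers exist on the node at all, not merely that they are stopped.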
	I1003 18:17:44.649184   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:44.659323   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:44.659383   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:44.684688   38063 cri.go:89] found id: ""
	I1003 18:17:44.684705   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.684714   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:44.684720   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:44.684766   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:44.709094   38063 cri.go:89] found id: ""
	I1003 18:17:44.709107   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.709113   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:44.709117   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:44.709155   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:44.733401   38063 cri.go:89] found id: ""
	I1003 18:17:44.733417   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.733426   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:44.733430   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:44.733469   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:44.757753   38063 cri.go:89] found id: ""
	I1003 18:17:44.757772   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.757780   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:44.757786   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:44.757841   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:44.781910   38063 cri.go:89] found id: ""
	I1003 18:17:44.781926   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.781933   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:44.781939   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:44.781995   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:44.805801   38063 cri.go:89] found id: ""
	I1003 18:17:44.805820   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.805829   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:44.805835   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:44.805882   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:44.830172   38063 cri.go:89] found id: ""
	I1003 18:17:44.830187   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.830195   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:44.830204   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:44.830218   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:44.898633   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:44.898651   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:44.909788   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:44.909802   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:44.964112   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:44.957005   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:44.957997   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:44.959562   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:44.960003   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:44.961510   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:44.964123   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:44.964137   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:45.022483   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:45.022503   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
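Every "kubectl describe nodes" attempt fails the same way because nothing answers on the apiserver port (8441 for this profile). A quick confirmation from inside the node, sketched under the assumption that ss and curl are present in the node image:

    # Both checks should come back empty/fail while the control plane is down.
    minikube ssh -p <profile> -- "sudo ss -ltn 'sport = :8441'"    # assumes ss is installed
    minikube ssh -p <profile> -- "curl -sk --max-time 2 https://localhost:8441/healthz || echo apiserver unreachable"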
	I1003 18:17:47.552208   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:47.562597   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:47.562644   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:47.587653   38063 cri.go:89] found id: ""
	I1003 18:17:47.587666   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.587672   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:47.587676   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:47.587722   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:47.611271   38063 cri.go:89] found id: ""
	I1003 18:17:47.611287   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.611294   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:47.611298   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:47.611344   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:47.635604   38063 cri.go:89] found id: ""
	I1003 18:17:47.635617   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.635625   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:47.635631   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:47.635704   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:47.660903   38063 cri.go:89] found id: ""
	I1003 18:17:47.660926   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.660933   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:47.660938   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:47.661007   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:47.686109   38063 cri.go:89] found id: ""
	I1003 18:17:47.686122   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.686129   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:47.686133   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:47.686172   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:47.710137   38063 cri.go:89] found id: ""
	I1003 18:17:47.710153   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.710161   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:47.710167   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:47.710207   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:47.734797   38063 cri.go:89] found id: ""
	I1003 18:17:47.734817   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.734826   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:47.734835   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:47.734849   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:47.745548   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:47.745565   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:47.799254   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:47.792392   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:47.793029   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:47.794533   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:47.794963   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:47.796403   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:47.799265   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:47.799274   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:47.861703   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:47.861720   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:47.888938   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:47.888953   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:50.454766   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:50.465005   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:50.465050   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:50.489074   38063 cri.go:89] found id: ""
	I1003 18:17:50.489087   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.489093   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:50.489098   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:50.489139   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:50.513935   38063 cri.go:89] found id: ""
	I1003 18:17:50.513950   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.513959   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:50.513964   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:50.514027   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:50.539148   38063 cri.go:89] found id: ""
	I1003 18:17:50.539166   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.539173   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:50.539179   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:50.539220   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:50.562923   38063 cri.go:89] found id: ""
	I1003 18:17:50.562944   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.562950   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:50.562959   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:50.563021   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:50.587009   38063 cri.go:89] found id: ""
	I1003 18:17:50.587022   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.587029   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:50.587033   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:50.587081   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:50.611334   38063 cri.go:89] found id: ""
	I1003 18:17:50.611350   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.611356   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:50.611361   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:50.611410   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:50.634818   38063 cri.go:89] found id: ""
	I1003 18:17:50.634832   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.634839   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:50.634846   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:50.634856   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:50.696044   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:50.696061   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:50.722679   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:50.722696   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:50.789104   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:50.789122   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:50.800113   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:50.800126   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:50.853877   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:50.846722   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:50.847312   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:50.848906   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:50.849353   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:50.851079   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
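The gather steps interleaved with the probes are plain journalctl and dmesg invocations and can be run directly when triaging a node stuck in this state (commands taken verbatim from the log; only the profile name is a placeholder):

    minikube ssh -p <profile> -- sudo journalctl -u kubelet -n 400
    minikube ssh -p <profile> -- sudo journalctl -u crio -n 400
    minikube ssh -p <profile> -- "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"

The kubelet log is usually the most informative of the three, since in a kubeadm-style setup the apiserver container is created by the kubelet from a static pod manifest.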
	I1003 18:17:53.354772   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:53.365080   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:53.365139   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:53.389900   38063 cri.go:89] found id: ""
	I1003 18:17:53.389913   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.389920   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:53.389930   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:53.389993   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:53.414775   38063 cri.go:89] found id: ""
	I1003 18:17:53.414790   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.414797   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:53.414801   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:53.414847   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:53.439429   38063 cri.go:89] found id: ""
	I1003 18:17:53.439445   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.439454   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:53.439460   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:53.439506   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:53.464200   38063 cri.go:89] found id: ""
	I1003 18:17:53.464214   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.464220   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:53.464225   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:53.464263   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:53.488529   38063 cri.go:89] found id: ""
	I1003 18:17:53.488542   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.488550   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:53.488556   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:53.488612   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:53.512935   38063 cri.go:89] found id: ""
	I1003 18:17:53.512950   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.512957   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:53.512962   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:53.513028   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:53.536738   38063 cri.go:89] found id: ""
	I1003 18:17:53.536754   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.536763   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:53.536771   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:53.536784   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:53.602221   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:53.602237   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:53.613558   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:53.613573   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:53.667019   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:53.660222   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:53.660704   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:53.662310   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:53.662769   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:53.664227   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:53.667029   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:53.667039   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:53.725461   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:53.725480   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:56.254692   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:56.264956   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:56.265017   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:56.289747   38063 cri.go:89] found id: ""
	I1003 18:17:56.289764   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.289772   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:56.289779   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:56.289821   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:56.314478   38063 cri.go:89] found id: ""
	I1003 18:17:56.314493   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.314501   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:56.314507   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:56.314557   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:56.338961   38063 cri.go:89] found id: ""
	I1003 18:17:56.338989   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.338998   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:56.339004   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:56.339046   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:56.364770   38063 cri.go:89] found id: ""
	I1003 18:17:56.364784   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.364789   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:56.364793   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:56.364832   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:56.391018   38063 cri.go:89] found id: ""
	I1003 18:17:56.391031   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.391037   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:56.391041   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:56.391081   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:56.415373   38063 cri.go:89] found id: ""
	I1003 18:17:56.415389   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.415398   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:56.415405   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:56.415447   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:56.439537   38063 cri.go:89] found id: ""
	I1003 18:17:56.439554   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.439564   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:56.439572   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:56.439584   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:56.506236   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:56.506256   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:56.517260   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:56.517274   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:56.570626   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:56.564107   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:56.564604   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:56.566115   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:56.566514   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:56.568021   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:56.570639   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:56.570658   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:56.633346   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:56.633369   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:59.161404   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:59.171988   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:59.172046   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:59.196437   38063 cri.go:89] found id: ""
	I1003 18:17:59.196449   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.196455   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:59.196459   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:59.196498   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:59.220855   38063 cri.go:89] found id: ""
	I1003 18:17:59.220868   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.220874   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:59.220878   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:59.220926   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:59.246564   38063 cri.go:89] found id: ""
	I1003 18:17:59.246579   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.246587   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:59.246595   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:59.246655   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:59.271407   38063 cri.go:89] found id: ""
	I1003 18:17:59.271422   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.271428   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:59.271433   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:59.271474   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:59.295265   38063 cri.go:89] found id: ""
	I1003 18:17:59.295281   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.295290   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:59.295297   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:59.295344   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:59.319819   38063 cri.go:89] found id: ""
	I1003 18:17:59.319835   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.319849   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:59.319853   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:59.319893   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:59.344045   38063 cri.go:89] found id: ""
	I1003 18:17:59.344058   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.344064   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:59.344071   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:59.344080   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:59.411448   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:59.411465   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:59.422319   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:59.422332   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:59.475228   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:59.468454   12932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:59.468914   12932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:59.470455   12932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:59.470862   12932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:59.472347   12932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:59.475255   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:59.475270   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:59.536088   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:59.536106   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:02.065737   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:02.076173   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:02.076214   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:02.101478   38063 cri.go:89] found id: ""
	I1003 18:18:02.101495   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.101505   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:02.101513   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:02.101556   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:02.126528   38063 cri.go:89] found id: ""
	I1003 18:18:02.126541   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.126547   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:02.126551   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:02.126591   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:02.150958   38063 cri.go:89] found id: ""
	I1003 18:18:02.150971   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.150997   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:02.151003   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:02.151051   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:02.176464   38063 cri.go:89] found id: ""
	I1003 18:18:02.176478   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.176485   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:02.176497   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:02.176539   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:02.201345   38063 cri.go:89] found id: ""
	I1003 18:18:02.201361   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.201368   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:02.201373   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:02.201415   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:02.227338   38063 cri.go:89] found id: ""
	I1003 18:18:02.227352   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.227359   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:02.227363   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:02.227407   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:02.253859   38063 cri.go:89] found id: ""
	I1003 18:18:02.253875   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.253882   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:02.253890   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:02.253902   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:02.314960   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:02.314986   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:02.343587   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:02.343605   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:02.412159   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:02.412178   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:02.423525   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:02.423542   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:02.480478   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:02.473940   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:02.474565   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:02.476146   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:02.476539   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:02.477814   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
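The "container status" gather uses a runtime-agnostic fallback chain: resolve crictl via which (falling back to a bare crictl if which finds nothing), and only if that command fails fall back to docker ps -a. The one-liner from the log can be reused as-is on the node:

    # Works on CRI-O/containerd (via crictl) and plain-docker nodes alike.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a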
	I1003 18:18:04.981110   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:04.992430   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:04.992470   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:05.019218   38063 cri.go:89] found id: ""
	I1003 18:18:05.019232   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.019238   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:05.019243   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:05.019282   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:05.042823   38063 cri.go:89] found id: ""
	I1003 18:18:05.042836   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.042841   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:05.042845   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:05.042902   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:05.069124   38063 cri.go:89] found id: ""
	I1003 18:18:05.069141   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.069148   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:05.069152   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:05.069196   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:05.093833   38063 cri.go:89] found id: ""
	I1003 18:18:05.093848   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.093856   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:05.093862   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:05.093932   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:05.119454   38063 cri.go:89] found id: ""
	I1003 18:18:05.119468   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.119475   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:05.119479   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:05.119523   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:05.143897   38063 cri.go:89] found id: ""
	I1003 18:18:05.143914   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.143920   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:05.143925   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:05.143966   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:05.167637   38063 cri.go:89] found id: ""
	I1003 18:18:05.167650   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.167656   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:05.167663   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:05.167674   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:05.195697   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:05.195715   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:05.260408   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:05.260428   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:05.271292   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:05.271309   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:05.324867   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:05.318440   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:05.318912   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:05.320332   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:05.320733   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:05.322261   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:05.324886   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:05.324898   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
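	
	The cycle above is minikube's apiserver health probe: it looks for a running kube-apiserver process, asks the CRI runtime for each control-plane container in turn, and, finding none, falls back to gathering kubelet, dmesg, node, and CRI-O logs. A minimal sketch of the same checks run by hand from a shell on the node (e.g. via minikube ssh; assumes crictl and journalctl are available, as they are in this image):
	
	    # Is a kube-apiserver process running at all?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	
	    # Ask the CRI runtime for the container, in any state
	    sudo crictl ps -a --quiet --name=kube-apiserver
	
	    # If both come back empty, check why the kubelet never started it
	    sudo journalctl -u kubelet -n 400 | tail -n 50
	    sudo journalctl -u crio -n 400 | tail -n 50
	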
	I1003 18:18:07.885833   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:07.895849   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:07.895957   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:07.921467   38063 cri.go:89] found id: ""
	I1003 18:18:07.921479   38063 logs.go:282] 0 containers: []
	W1003 18:18:07.921485   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:07.921490   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:07.921545   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:07.945467   38063 cri.go:89] found id: ""
	I1003 18:18:07.945480   38063 logs.go:282] 0 containers: []
	W1003 18:18:07.945487   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:07.945492   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:07.945539   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:07.970084   38063 cri.go:89] found id: ""
	I1003 18:18:07.970098   38063 logs.go:282] 0 containers: []
	W1003 18:18:07.970105   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:07.970110   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:07.970148   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:07.994263   38063 cri.go:89] found id: ""
	I1003 18:18:07.994278   38063 logs.go:282] 0 containers: []
	W1003 18:18:07.994287   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:07.994293   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:07.994334   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:08.018778   38063 cri.go:89] found id: ""
	I1003 18:18:08.018793   38063 logs.go:282] 0 containers: []
	W1003 18:18:08.018800   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:08.018805   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:08.018844   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:08.043138   38063 cri.go:89] found id: ""
	I1003 18:18:08.043153   38063 logs.go:282] 0 containers: []
	W1003 18:18:08.043159   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:08.043164   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:08.043203   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:08.067785   38063 cri.go:89] found id: ""
	I1003 18:18:08.067799   38063 logs.go:282] 0 containers: []
	W1003 18:18:08.067805   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:08.067811   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:08.067820   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:08.136408   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:08.136429   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:08.147427   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:08.147445   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:08.201110   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:08.194693   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:08.195161   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:08.196715   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:08.197135   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:08.198610   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:08.201124   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:08.201135   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:08.261991   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:08.262010   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
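	
	Every kubectl attempt in these cycles fails before any API call is made: nothing is listening on localhost:8441, the apiserver port for this profile, so the TCP connect itself is refused. A quick way to confirm the port state from the node, using standard tools (a sketch, not part of the test harness):
	
	    # Probe the apiserver port directly; "connection refused" here
	    # matches the kubectl errors above
	    curl -k --max-time 5 https://localhost:8441/healthz
	
	    # Show whether anything is bound to port 8441
	    sudo ss -tlnp | grep 8441 || echo "nothing listening on 8441"
	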
	I1003 18:18:10.791196   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:10.801467   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:10.801525   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:10.827655   38063 cri.go:89] found id: ""
	I1003 18:18:10.827672   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.827683   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:10.827688   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:10.827735   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:10.852558   38063 cri.go:89] found id: ""
	I1003 18:18:10.852574   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.852582   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:10.852588   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:10.852638   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:10.876842   38063 cri.go:89] found id: ""
	I1003 18:18:10.876858   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.876870   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:10.876874   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:10.876918   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:10.902827   38063 cri.go:89] found id: ""
	I1003 18:18:10.902840   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.902846   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:10.902851   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:10.902890   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:10.927840   38063 cri.go:89] found id: ""
	I1003 18:18:10.927855   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.927861   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:10.927865   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:10.927909   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:10.952535   38063 cri.go:89] found id: ""
	I1003 18:18:10.952550   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.952556   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:10.952561   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:10.952602   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:10.976585   38063 cri.go:89] found id: ""
	I1003 18:18:10.976601   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.976610   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:10.976620   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:10.976631   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:10.987359   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:10.987373   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:11.041048   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:11.034604   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:11.035105   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:11.036603   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:11.036989   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:11.038508   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:11.041058   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:11.041068   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:11.101637   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:11.101658   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:11.128867   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:11.128885   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:13.697689   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:13.708864   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:13.708949   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:13.733837   38063 cri.go:89] found id: ""
	I1003 18:18:13.733851   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.733857   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:13.733864   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:13.733915   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:13.757681   38063 cri.go:89] found id: ""
	I1003 18:18:13.757698   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.757707   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:13.757713   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:13.757778   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:13.782545   38063 cri.go:89] found id: ""
	I1003 18:18:13.782560   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.782572   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:13.782576   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:13.782624   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:13.806939   38063 cri.go:89] found id: ""
	I1003 18:18:13.806955   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.806964   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:13.806970   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:13.807041   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:13.831768   38063 cri.go:89] found id: ""
	I1003 18:18:13.831783   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.831790   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:13.831795   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:13.831837   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:13.856076   38063 cri.go:89] found id: ""
	I1003 18:18:13.856093   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.856101   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:13.856107   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:13.856163   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:13.879410   38063 cri.go:89] found id: ""
	I1003 18:18:13.879423   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.879430   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:13.879438   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:13.879450   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:13.944708   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:13.944727   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:13.956175   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:13.956194   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:14.010487   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:14.003834   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:14.004418   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:14.005911   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:14.006368   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:14.007894   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:14.010499   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:14.010514   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:14.071892   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:14.071911   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:16.601878   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:16.612139   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:16.612183   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:16.635115   38063 cri.go:89] found id: ""
	I1003 18:18:16.635128   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.635134   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:16.635139   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:16.635180   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:16.660332   38063 cri.go:89] found id: ""
	I1003 18:18:16.660347   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.660354   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:16.660361   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:16.660416   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:16.683528   38063 cri.go:89] found id: ""
	I1003 18:18:16.683551   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.683560   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:16.683566   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:16.683619   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:16.708287   38063 cri.go:89] found id: ""
	I1003 18:18:16.708304   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.708313   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:16.708319   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:16.708368   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:16.732627   38063 cri.go:89] found id: ""
	I1003 18:18:16.732642   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.732651   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:16.732670   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:16.732712   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:16.757768   38063 cri.go:89] found id: ""
	I1003 18:18:16.757782   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.757788   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:16.757793   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:16.757836   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:16.781970   38063 cri.go:89] found id: ""
	I1003 18:18:16.781997   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.782011   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:16.782020   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:16.782036   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:16.850796   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:16.850813   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:16.862129   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:16.862143   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:16.915039   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:16.908470   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:16.908860   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:16.910345   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:16.910711   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:16.912263   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:16.915050   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:16.915063   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:16.972388   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:16.972405   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:19.502094   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:19.512481   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:19.512541   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:19.537212   38063 cri.go:89] found id: ""
	I1003 18:18:19.537228   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.537236   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:19.537242   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:19.537305   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:19.561717   38063 cri.go:89] found id: ""
	I1003 18:18:19.561734   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.561741   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:19.561746   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:19.561793   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:19.585423   38063 cri.go:89] found id: ""
	I1003 18:18:19.585436   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.585443   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:19.585447   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:19.585490   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:19.609708   38063 cri.go:89] found id: ""
	I1003 18:18:19.609722   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.609728   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:19.609733   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:19.609772   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:19.632853   38063 cri.go:89] found id: ""
	I1003 18:18:19.632869   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.632878   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:19.632884   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:19.632933   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:19.656204   38063 cri.go:89] found id: ""
	I1003 18:18:19.656220   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.656228   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:19.656235   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:19.656287   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:19.680640   38063 cri.go:89] found id: ""
	I1003 18:18:19.680663   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.680669   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:19.680677   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:19.680689   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:19.707259   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:19.707275   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:19.774362   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:19.774380   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:19.785563   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:19.785577   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:19.839901   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:19.833112   13812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:19.833732   13812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:19.835306   13812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:19.835682   13812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:19.837164   13812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:19.839911   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:19.839921   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:22.400537   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:22.410712   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:22.410758   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:22.434956   38063 cri.go:89] found id: ""
	I1003 18:18:22.434970   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.434988   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:22.434995   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:22.435050   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:22.459920   38063 cri.go:89] found id: ""
	I1003 18:18:22.459936   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.459945   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:22.459950   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:22.460011   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:22.484807   38063 cri.go:89] found id: ""
	I1003 18:18:22.484821   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.484827   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:22.484832   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:22.484876   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:22.510038   38063 cri.go:89] found id: ""
	I1003 18:18:22.510055   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.510063   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:22.510069   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:22.510127   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:22.534586   38063 cri.go:89] found id: ""
	I1003 18:18:22.534606   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.534616   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:22.534622   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:22.534684   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:22.559759   38063 cri.go:89] found id: ""
	I1003 18:18:22.559776   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.559785   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:22.559791   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:22.559847   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:22.584554   38063 cri.go:89] found id: ""
	I1003 18:18:22.584569   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.584579   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:22.584588   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:22.584602   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:22.653550   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:22.653568   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:22.664744   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:22.664760   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:22.718670   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:22.712190   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:22.712660   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:22.714209   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:22.714609   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:22.716119   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:22.718679   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:22.718689   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:22.781634   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:22.781662   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:25.311342   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:25.321538   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:25.321589   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:25.347212   38063 cri.go:89] found id: ""
	I1003 18:18:25.347228   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.347237   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:25.347244   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:25.347288   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:25.373240   38063 cri.go:89] found id: ""
	I1003 18:18:25.373255   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.373261   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:25.373265   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:25.373316   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:25.398262   38063 cri.go:89] found id: ""
	I1003 18:18:25.398280   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.398287   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:25.398293   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:25.398340   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:25.423522   38063 cri.go:89] found id: ""
	I1003 18:18:25.423536   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.423544   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:25.423550   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:25.423609   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:25.448232   38063 cri.go:89] found id: ""
	I1003 18:18:25.448249   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.448258   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:25.448264   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:25.448311   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:25.474690   38063 cri.go:89] found id: ""
	I1003 18:18:25.474704   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.474710   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:25.474716   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:25.474766   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:25.499693   38063 cri.go:89] found id: ""
	I1003 18:18:25.499707   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.499715   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:25.499723   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:25.499733   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:25.526210   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:25.526225   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:25.595354   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:25.595373   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:25.606969   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:25.606998   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:25.662186   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:25.655368   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:25.655970   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:25.657492   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:25.657931   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:25.659386   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:25.662197   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:25.662206   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:28.226017   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:28.237132   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:28.237175   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:28.262449   38063 cri.go:89] found id: ""
	I1003 18:18:28.262466   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.262474   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:28.262479   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:28.262524   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:28.287653   38063 cri.go:89] found id: ""
	I1003 18:18:28.287669   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.287679   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:28.287685   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:28.287730   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:28.313255   38063 cri.go:89] found id: ""
	I1003 18:18:28.313269   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.313276   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:28.313280   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:28.313321   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:28.338727   38063 cri.go:89] found id: ""
	I1003 18:18:28.338742   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.338748   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:28.338752   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:28.338793   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:28.363285   38063 cri.go:89] found id: ""
	I1003 18:18:28.363303   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.363312   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:28.363317   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:28.363359   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:28.388945   38063 cri.go:89] found id: ""
	I1003 18:18:28.388958   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.388964   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:28.388969   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:28.389039   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:28.414591   38063 cri.go:89] found id: ""
	I1003 18:18:28.414607   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.414614   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:28.414621   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:28.414630   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:28.425367   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:28.425382   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:28.479472   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:28.472065   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:28.472604   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:28.474900   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:28.475366   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:28.476874   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:28.479481   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:28.479491   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:28.538844   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:28.538865   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:28.567294   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:28.567309   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:31.138009   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:31.148430   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:31.148480   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:31.173355   38063 cri.go:89] found id: ""
	I1003 18:18:31.173368   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.173375   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:31.173380   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:31.173418   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:31.198151   38063 cri.go:89] found id: ""
	I1003 18:18:31.198166   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.198181   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:31.198187   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:31.198231   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:31.223275   38063 cri.go:89] found id: ""
	I1003 18:18:31.223290   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.223296   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:31.223300   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:31.223343   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:31.247221   38063 cri.go:89] found id: ""
	I1003 18:18:31.247237   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.247248   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:31.247253   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:31.247310   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:31.270563   38063 cri.go:89] found id: ""
	I1003 18:18:31.270576   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.270582   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:31.270586   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:31.270636   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:31.295134   38063 cri.go:89] found id: ""
	I1003 18:18:31.295150   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.295159   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:31.295165   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:31.295204   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:31.319654   38063 cri.go:89] found id: ""
	I1003 18:18:31.319668   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.319675   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:31.319683   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:31.319698   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:31.386428   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:31.386448   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:31.397662   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:31.397677   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:31.451288   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:31.444650   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.445190   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.446750   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.447199   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.448658   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:18:31.444650   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.445190   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.446750   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.447199   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.448658   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:18:31.451299   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:31.451309   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:31.510468   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:31.510487   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:34.039627   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:34.050185   38063 kubeadm.go:601] duration metric: took 4m1.950557888s to restartPrimaryControlPlane
	W1003 18:18:34.050251   38063 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1003 18:18:34.050324   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 18:18:34.501082   38063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:18:34.513430   38063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:18:34.521102   38063 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:18:34.521139   38063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:18:34.528531   38063 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:18:34.528540   38063 kubeadm.go:157] found existing configuration files:
	
	I1003 18:18:34.528574   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1003 18:18:34.535908   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:18:34.535967   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:18:34.543072   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1003 18:18:34.550220   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:18:34.550263   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:18:34.557251   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1003 18:18:34.565090   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:18:34.565130   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:18:34.571882   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1003 18:18:34.579174   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:18:34.579210   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:18:34.585996   38063 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:18:34.620715   38063 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:18:34.620773   38063 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:18:34.639243   38063 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:18:34.639317   38063 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:18:34.639360   38063 kubeadm.go:318] OS: Linux
	I1003 18:18:34.639397   38063 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:18:34.639466   38063 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:18:34.639529   38063 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:18:34.639587   38063 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:18:34.639687   38063 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:18:34.639749   38063 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:18:34.639803   38063 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:18:34.639863   38063 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:18:34.692781   38063 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:18:34.692898   38063 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:18:34.693025   38063 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:18:34.699300   38063 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:18:34.703358   38063 out.go:252]   - Generating certificates and keys ...
	I1003 18:18:34.703438   38063 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:18:34.703491   38063 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:18:34.703553   38063 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 18:18:34.703602   38063 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 18:18:34.703664   38063 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 18:18:34.703733   38063 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 18:18:34.703790   38063 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 18:18:34.703840   38063 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 18:18:34.703900   38063 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 18:18:34.703962   38063 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 18:18:34.704000   38063 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 18:18:34.704043   38063 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:18:34.953422   38063 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:18:35.214353   38063 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:18:35.447415   38063 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:18:35.645347   38063 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:18:36.220332   38063 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:18:36.220714   38063 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:18:36.222788   38063 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:18:36.225372   38063 out.go:252]   - Booting up control plane ...
	I1003 18:18:36.225492   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:18:36.225605   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:18:36.225672   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:18:36.237955   38063 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:18:36.238117   38063 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:18:36.244390   38063 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:18:36.244573   38063 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:18:36.244608   38063 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:18:36.339701   38063 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:18:36.339860   38063 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:18:36.841336   38063 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.785786ms
	I1003 18:18:36.845100   38063 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:18:36.845207   38063 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1003 18:18:36.845308   38063 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:18:36.845418   38063 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:22:36.846410   38063 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001254073s
	I1003 18:22:36.846572   38063 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001316832s
	I1003 18:22:36.846680   38063 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00135784s
	I1003 18:22:36.846684   38063 kubeadm.go:318] 
	I1003 18:22:36.846803   38063 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:22:36.846887   38063 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:22:36.847019   38063 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:22:36.847152   38063 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:22:36.847221   38063 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:22:36.847290   38063 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:22:36.847293   38063 kubeadm.go:318] 
	I1003 18:22:36.850267   38063 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:22:36.850420   38063 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:22:36.851109   38063 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1003 18:22:36.851222   38063 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1003 18:22:36.851310   38063 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.785786ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001254073s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001316832s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00135784s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1003 18:22:36.851378   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 18:22:37.292774   38063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:22:37.305190   38063 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:22:37.305239   38063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:22:37.312706   38063 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:22:37.312714   38063 kubeadm.go:157] found existing configuration files:
	
	I1003 18:22:37.312747   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1003 18:22:37.319873   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:22:37.319914   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:22:37.326628   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1003 18:22:37.333616   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:22:37.333654   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:22:37.340503   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1003 18:22:37.347489   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:22:37.347533   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:22:37.354448   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1003 18:22:37.361615   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:22:37.361649   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:22:37.368313   38063 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:22:37.421185   38063 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:22:37.475455   38063 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:26:40.291288   38063 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1003 18:26:40.291385   38063 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 18:26:40.294089   38063 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:26:40.294149   38063 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:26:40.294247   38063 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:26:40.294331   38063 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:26:40.294363   38063 kubeadm.go:318] OS: Linux
	I1003 18:26:40.294399   38063 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:26:40.294467   38063 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:26:40.294515   38063 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:26:40.294554   38063 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:26:40.294601   38063 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:26:40.294658   38063 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:26:40.294706   38063 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:26:40.294741   38063 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:26:40.294849   38063 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:26:40.294960   38063 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:26:40.295057   38063 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:26:40.295109   38063 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:26:40.297835   38063 out.go:252]   - Generating certificates and keys ...
	I1003 18:26:40.297914   38063 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:26:40.297990   38063 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:26:40.298082   38063 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 18:26:40.298152   38063 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 18:26:40.298217   38063 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 18:26:40.298275   38063 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 18:26:40.298326   38063 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 18:26:40.298376   38063 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 18:26:40.298444   38063 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 18:26:40.298519   38063 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 18:26:40.298554   38063 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 18:26:40.298605   38063 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:26:40.298646   38063 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:26:40.298698   38063 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:26:40.298740   38063 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:26:40.298791   38063 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:26:40.298839   38063 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:26:40.298907   38063 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:26:40.298998   38063 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:26:40.300468   38063 out.go:252]   - Booting up control plane ...
	I1003 18:26:40.300542   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:26:40.300632   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:26:40.300695   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:26:40.300779   38063 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:26:40.300871   38063 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:26:40.300963   38063 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:26:40.301061   38063 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:26:40.301100   38063 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:26:40.301207   38063 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:26:40.301294   38063 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:26:40.301341   38063 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500810972s
	I1003 18:26:40.301415   38063 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:26:40.301479   38063 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1003 18:26:40.301550   38063 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:26:40.301629   38063 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:26:40.301688   38063 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001083242s
	I1003 18:26:40.301753   38063 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001112366s
	I1003 18:26:40.301845   38063 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001257154s
	I1003 18:26:40.301849   38063 kubeadm.go:318] 
	I1003 18:26:40.301925   38063 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:26:40.302009   38063 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:26:40.302080   38063 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:26:40.302157   38063 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:26:40.302217   38063 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:26:40.302288   38063 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:26:40.302308   38063 kubeadm.go:318] 
	I1003 18:26:40.302352   38063 kubeadm.go:402] duration metric: took 12m8.237590419s to StartCluster
	I1003 18:26:40.302401   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:26:40.302450   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:26:40.329135   38063 cri.go:89] found id: ""
	I1003 18:26:40.329148   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.329154   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:26:40.329160   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:26:40.329203   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:26:40.354340   38063 cri.go:89] found id: ""
	I1003 18:26:40.354354   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.354361   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:26:40.354366   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:26:40.354419   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:26:40.380556   38063 cri.go:89] found id: ""
	I1003 18:26:40.380570   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.380576   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:26:40.380581   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:26:40.380640   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:26:40.406655   38063 cri.go:89] found id: ""
	I1003 18:26:40.406670   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.406677   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:26:40.406683   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:26:40.406728   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:26:40.432698   38063 cri.go:89] found id: ""
	I1003 18:26:40.432713   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.432720   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:26:40.432725   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:26:40.432769   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:26:40.459363   38063 cri.go:89] found id: ""
	I1003 18:26:40.459378   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.459384   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:26:40.459390   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:26:40.459437   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:26:40.484951   38063 cri.go:89] found id: ""
	I1003 18:26:40.484964   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.484971   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:26:40.484997   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:26:40.485019   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:26:40.549245   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:26:40.549263   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:26:40.560727   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:26:40.560741   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:26:40.616474   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:26:40.609386   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.610009   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.611564   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.611939   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.613451   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:26:40.609386   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.610009   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.611564   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.611939   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.613451   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:26:40.616500   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:26:40.616509   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:26:40.676470   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:26:40.676488   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1003 18:26:40.704576   38063 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500810972s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001083242s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112366s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001257154s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1003 18:26:40.704638   38063 out.go:285] * 
	W1003 18:26:40.704701   38063 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500810972s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001083242s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112366s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001257154s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:26:40.704715   38063 out.go:285] * 
	W1003 18:26:40.706538   38063 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:26:40.710390   38063 out.go:203] 
	W1003 18:26:40.711880   38063 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500810972s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001083242s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112366s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001257154s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:26:40.711903   38063 out.go:285] * 
	I1003 18:26:40.714182   38063 out.go:203] 
	
	
	==> CRI-O <==
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.31803988Z" level=info msg="Checking image status: kicbase/echo-server:functional-889240" id=8dbb9dc5-c0bc-4fb6-8380-f67e530bd701 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.351152043Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-889240" id=f8b0cdf9-8a3c-47b8-827a-041430aa968f name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.3512939Z" level=info msg="Image docker.io/kicbase/echo-server:functional-889240 not found" id=f8b0cdf9-8a3c-47b8-827a-041430aa968f name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.351334555Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-889240 found" id=f8b0cdf9-8a3c-47b8-827a-041430aa968f name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.385065792Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-889240" id=18ee7915-b6dc-477f-a8be-8e74388993fd name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.38551812Z" level=info msg="Image localhost/kicbase/echo-server:functional-889240 not found" id=18ee7915-b6dc-477f-a8be-8e74388993fd name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.385573149Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-889240 found" id=18ee7915-b6dc-477f-a8be-8e74388993fd name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.925244555Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=3cdd4878-6c29-4f9c-a7c1-e8d24b35f518 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.926186275Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=999b78e6-746a-4495-9410-a789f6c9b2d1 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.927345786Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-889240/kube-apiserver" id=a287f5c1-738c-44f4-93cf-fa8b273170d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.927608075Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.935572875Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.937507124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.951683101Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a287f5c1-738c-44f4-93cf-fa8b273170d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.953670541Z" level=info msg="createCtr: deleting container ID 94a45024cee963f25522950daa008598cc2b6a92c31321cf665c9a52bed71c52 from idIndex" id=a287f5c1-738c-44f4-93cf-fa8b273170d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.953726976Z" level=info msg="createCtr: removing container 94a45024cee963f25522950daa008598cc2b6a92c31321cf665c9a52bed71c52" id=a287f5c1-738c-44f4-93cf-fa8b273170d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.953775078Z" level=info msg="createCtr: deleting container 94a45024cee963f25522950daa008598cc2b6a92c31321cf665c9a52bed71c52 from storage" id=a287f5c1-738c-44f4-93cf-fa8b273170d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.957548935Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-889240_kube-system_9d9b7aefd7427246dd018814b6979298_0" id=a287f5c1-738c-44f4-93cf-fa8b273170d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.292108767Z" level=info msg="Checking image status: kicbase/echo-server:functional-889240" id=abbb2808-ed68-484b-b163-379c059f6d17 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.319279189Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-889240" id=d452395f-c84e-40da-918e-c48346047241 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.319865155Z" level=info msg="Image docker.io/kicbase/echo-server:functional-889240 not found" id=d452395f-c84e-40da-918e-c48346047241 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.319920152Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-889240 found" id=d452395f-c84e-40da-918e-c48346047241 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.352587677Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-889240" id=5d9b33fc-7d75-497a-8748-fc1b3d440fcc name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.352740621Z" level=info msg="Image localhost/kicbase/echo-server:functional-889240 not found" id=5d9b33fc-7d75-497a-8748-fc1b3d440fcc name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.352785301Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-889240 found" id=5d9b33fc-7d75-497a-8748-fc1b3d440fcc name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:26:51.859459   17289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:51.860144   17289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:51.861788   17289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:51.862334   17289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:51.863939   17289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:26:51 up  1:09,  0 user,  load average: 1.01, 0.26, 0.10
	Linux functional-889240 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:26:45 functional-889240 kubelet[15004]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-889240_kube-system(7e715cb6024854d45a9fa99576167e43): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:45 functional-889240 kubelet[15004]:  > logger="UnhandledError"
	Oct 03 18:26:45 functional-889240 kubelet[15004]: E1003 18:26:45.950027   15004 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-889240" podUID="7e715cb6024854d45a9fa99576167e43"
	Oct 03 18:26:47 functional-889240 kubelet[15004]: E1003 18:26:47.292830   15004 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-889240&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 03 18:26:47 functional-889240 kubelet[15004]: E1003 18:26:47.311203   15004 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-889240.186b0e42e698a181  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-889240,UID:functional-889240,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-889240 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-889240,},FirstTimestamp:2025-10-03 18:22:39.917703553 +0000 UTC m=+1.131431312,LastTimestamp:2025-10-03 18:22:39.917703553 +0000 UTC m=+1.131431312,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-889240,}"
	Oct 03 18:26:47 functional-889240 kubelet[15004]: E1003 18:26:47.924783   15004 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:26:47 functional-889240 kubelet[15004]: E1003 18:26:47.967879   15004 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:26:47 functional-889240 kubelet[15004]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:47 functional-889240 kubelet[15004]:  > podSandboxID="cc37714218db619cb7a417ce510ab6d24921b06cab2510376343b7b5c57bba9a"
	Oct 03 18:26:47 functional-889240 kubelet[15004]: E1003 18:26:47.967997   15004 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:26:47 functional-889240 kubelet[15004]:         container kube-scheduler start failed in pod kube-scheduler-functional-889240_kube-system(7dadd1df42d6a2c3d1907f134f7d5ea7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:47 functional-889240 kubelet[15004]:  > logger="UnhandledError"
	Oct 03 18:26:47 functional-889240 kubelet[15004]: E1003 18:26:47.968041   15004 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-889240" podUID="7dadd1df42d6a2c3d1907f134f7d5ea7"
	Oct 03 18:26:49 functional-889240 kubelet[15004]: E1003 18:26:49.940484   15004 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-889240\" not found"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.548387   15004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-889240?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: I1003 18:26:50.701447   15004 kubelet_node_status.go:75] "Attempting to register node" node="functional-889240"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.702007   15004 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-889240"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.924684   15004 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.958040   15004 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:26:50 functional-889240 kubelet[15004]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:50 functional-889240 kubelet[15004]:  > podSandboxID="d2a1f7a262459adddcbc8998558ca80ae50f332cedd95d5813e79fa17642c365"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.958159   15004 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:26:50 functional-889240 kubelet[15004]:         container kube-apiserver start failed in pod kube-apiserver-functional-889240_kube-system(9d9b7aefd7427246dd018814b6979298): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:50 functional-889240 kubelet[15004]:  > logger="UnhandledError"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.958199   15004 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-889240" podUID="9d9b7aefd7427246dd018814b6979298"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240: exit status 2 (342.232822ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-889240" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (3.24s)
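Note that every control-plane CreateContainer failure in the logs above carries the same root-cause string, "container create failed: cannot open sd-bus: No such file or directory". CRI-O is running with the systemd cgroup manager here, and that error is what the OCI runtime reports when it cannot reach the systemd D-Bus socket inside the kic node container. A minimal triage sketch, assuming the functional-889240 node is still up (the socket paths below are typical systemd defaults, not values taken from this run):

	# confirm which cgroup manager CRI-O is configured with (systemd vs cgroupfs)
	minikube -p functional-889240 ssh -- 'sudo crio config 2>/dev/null | grep -i cgroup_manager'
	# check whether the systemd D-Bus sockets the runtime needs actually exist
	minikube -p functional-889240 ssh -- 'ls -l /run/systemd/private /run/dbus/system_bus_socket'
	# list the failing control-plane containers, as the kubeadm hint suggests
	minikube -p functional-889240 ssh -- 'sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'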

TestFunctional/parallel/ServiceCmdConnect (1.49s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-889240 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-889240 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (52.554316ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-889240 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-889240 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-889240 describe po hello-node-connect: exit status 1 (59.056556ms)

** stderr ** 
	E1003 18:26:56.594125   61086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:56.594499   61086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:56.595903   61086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:56.596194   61086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:56.597565   61086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-889240 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-889240 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-889240 logs -l app=hello-node-connect: exit status 1 (49.08194ms)

** stderr ** 
	E1003 18:26:56.642576   61103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:56.642922   61103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:56.644327   61103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:56.644684   61103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-889240 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-889240 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-889240 describe svc hello-node-connect: exit status 1 (53.400887ms)

** stderr ** 
	E1003 18:26:56.695992   61124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:56.696363   61124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:56.697530   61124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:56.697852   61124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:56.699293   61124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-889240 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
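All three post-mortem kubectl calls above die at the same point as the deployment creation itself: nothing is listening on 192.168.49.2:8441, so the test never gets past its first API call. For reference, this is roughly the flow the test drives, runnable by hand once the apiserver is reachable (the container port 8080 and the NodePort service type are assumptions based on typical kicbase/echo-server usage, not read from the test source):

	kubectl --context functional-889240 create deployment hello-node-connect --image=kicbase/echo-server
	kubectl --context functional-889240 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-linux-amd64 -p functional-889240 service hello-node-connect --url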
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-889240
helpers_test.go:243: (dbg) docker inspect functional-889240:

-- stdout --
	[
	    {
	        "Id": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	        "Created": "2025-10-03T17:59:56.619817507Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 26766,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T17:59:56.652603806Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hosts",
	        "LogPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a-json.log",
	        "Name": "/functional-889240",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-889240:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-889240",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	                "LowerDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-889240",
	                "Source": "/var/lib/docker/volumes/functional-889240/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-889240",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-889240",
	                "name.minikube.sigs.k8s.io": "functional-889240",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da15d31dc23bdd4694ae9e3b61015d7ce0d61668c73d3e386422834c6f0321d8",
	            "SandboxKey": "/var/run/docker/netns/da15d31dc23b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-889240": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:9e:1d:e9:d9:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "03281bed183d0817c0bc237b5c25093fc10222138aedde4c7deef5823759fa24",
	                    "EndpointID": "28fa584fdd6e253816ae08a2460ef02b91085c8a7996d55008876e3bd65bbc7e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-889240",
	                        "9f4f0f10b4a9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
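The inspect dump narrows the failure down: the node container itself is fine (State.Status "running", no OOM kill, RestartCount 0) and the apiserver port 8441 is published to 127.0.0.1:32781 on the host, so the connection refusals come from inside the guest, not from the Docker layer. The same facts can be pulled without the full dump using only the standard docker CLI; a short sketch (the Go template goes through index because the network name contains a hyphen):

	docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "functional-889240").IPAddress}}' functional-889240
	docker port functional-889240 8441/tcp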
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240: exit status 2 (300.312479ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 logs -n 25
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-889240 ssh echo hello                                                                                                  │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh       │ functional-889240 ssh sudo umount -f /mount-9p                                                                                    │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh       │ functional-889240 ssh cat /etc/hostname                                                                                           │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ tunnel    │ functional-889240 tunnel --alsologtostderr                                                                                        │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ tunnel    │ functional-889240 tunnel --alsologtostderr                                                                                        │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ mount     │ -p functional-889240 /tmp/TestFunctionalparallelMountCmdspecific-port3898317380/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh       │ functional-889240 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ start     │ -p functional-889240 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ tunnel    │ functional-889240 tunnel --alsologtostderr                                                                                        │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ start     │ -p functional-889240 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                   │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ start     │ -p functional-889240 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh       │ functional-889240 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ dashboard │ --url --port 36195 -p functional-889240 --alsologtostderr -v=1                                                                    │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh       │ functional-889240 ssh -- ls -la /mount-9p                                                                                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh       │ functional-889240 ssh sudo umount -f /mount-9p                                                                                    │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ mount     │ -p functional-889240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1298239377/001:/mount1 --alsologtostderr -v=1                │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ mount     │ -p functional-889240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1298239377/001:/mount2 --alsologtostderr -v=1                │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh       │ functional-889240 ssh findmnt -T /mount1                                                                                          │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ mount     │ -p functional-889240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1298239377/001:/mount3 --alsologtostderr -v=1                │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh       │ functional-889240 ssh findmnt -T /mount1                                                                                          │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh       │ functional-889240 ssh findmnt -T /mount2                                                                                          │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh       │ functional-889240 ssh findmnt -T /mount3                                                                                          │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ mount     │ -p functional-889240 --kill=true                                                                                                  │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ addons    │ functional-889240 addons list                                                                                                     │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ addons    │ functional-889240 addons list -o json                                                                                             │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:26:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:26:53.356472   58930 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:26:53.356745   58930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:26:53.356756   58930 out.go:374] Setting ErrFile to fd 2...
	I1003 18:26:53.356762   58930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:26:53.357062   58930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:26:53.357508   58930 out.go:368] Setting JSON to false
	I1003 18:26:53.358398   58930 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4164,"bootTime":1759511849,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:26:53.358491   58930 start.go:140] virtualization: kvm guest
	I1003 18:26:53.360378   58930 out.go:179] * [functional-889240] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:26:53.361688   58930 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:26:53.361693   58930 notify.go:220] Checking for updates...
	I1003 18:26:53.363055   58930 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:26:53.364385   58930 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:26:53.365536   58930 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:26:53.366672   58930 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:26:53.367760   58930 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:26:53.369355   58930 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:26:53.369795   58930 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:26:53.393358   58930 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:26:53.393501   58930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:26:53.449005   58930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:26:53.436272745 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:26:53.449135   58930 docker.go:318] overlay module found
	I1003 18:26:53.451084   58930 out.go:179] * Using the docker driver based on existing profile
	I1003 18:26:53.452223   58930 start.go:304] selected driver: docker
	I1003 18:26:53.452240   58930 start.go:924] validating driver "docker" against &{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:26:53.452344   58930 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:26:53.454148   58930 out.go:203] 
	W1003 18:26:53.455299   58930 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1003 18:26:53.456336   58930 out.go:203] 
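	The start attempt above fails in validation before touching the cluster: the requested 250MiB is rejected against a 1800MB usable floor (note the message mixes MiB and MB). A minimal Go sketch of that kind of bounds check; validateMemory and minUsableMB are hypothetical names for illustration, not minikube's actual implementation:

// memcheck.go: an illustrative sketch of the validation that produces
// RSRC_INSUFFICIENT_REQ_MEMORY above. Names and structure are assumptions.
package main

import "fmt"

// minUsableMB mirrors the 1800MB floor quoted in the error message.
const minUsableMB = 1800

// validateMemory converts MiB to MB before comparing, since the logged
// message mixes both units.
func validateMemory(requestedMiB int) error {
	requestedMB := requestedMiB * 1024 * 1024 / (1000 * 1000) // MiB -> MB
	if requestedMB < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested allocation %dMiB (~%dMB) is less than the usable minimum of %dMB",
			requestedMiB, requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Println("X Exiting due to", err) // same shape as the out.go line above
	}
}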
	
	
	==> CRI-O <==
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.935572875Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.937507124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.951683101Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a287f5c1-738c-44f4-93cf-fa8b273170d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.953670541Z" level=info msg="createCtr: deleting container ID 94a45024cee963f25522950daa008598cc2b6a92c31321cf665c9a52bed71c52 from idIndex" id=a287f5c1-738c-44f4-93cf-fa8b273170d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.953726976Z" level=info msg="createCtr: removing container 94a45024cee963f25522950daa008598cc2b6a92c31321cf665c9a52bed71c52" id=a287f5c1-738c-44f4-93cf-fa8b273170d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.953775078Z" level=info msg="createCtr: deleting container 94a45024cee963f25522950daa008598cc2b6a92c31321cf665c9a52bed71c52 from storage" id=a287f5c1-738c-44f4-93cf-fa8b273170d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.957548935Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-889240_kube-system_9d9b7aefd7427246dd018814b6979298_0" id=a287f5c1-738c-44f4-93cf-fa8b273170d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.292108767Z" level=info msg="Checking image status: kicbase/echo-server:functional-889240" id=abbb2808-ed68-484b-b163-379c059f6d17 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.319279189Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-889240" id=d452395f-c84e-40da-918e-c48346047241 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.319865155Z" level=info msg="Image docker.io/kicbase/echo-server:functional-889240 not found" id=d452395f-c84e-40da-918e-c48346047241 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.319920152Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-889240 found" id=d452395f-c84e-40da-918e-c48346047241 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.352587677Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-889240" id=5d9b33fc-7d75-497a-8748-fc1b3d440fcc name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.352740621Z" level=info msg="Image localhost/kicbase/echo-server:functional-889240 not found" id=5d9b33fc-7d75-497a-8748-fc1b3d440fcc name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.352785301Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-889240 found" id=5d9b33fc-7d75-497a-8748-fc1b3d440fcc name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:54 functional-889240 crio[5881]: time="2025-10-03T18:26:54.92556282Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=1fe2569b-e9ac-4cb7-ab64-4cb8d0d9320b name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:54 functional-889240 crio[5881]: time="2025-10-03T18:26:54.926653774Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=f96ae2f5-7d10-4b2e-8d8f-a44fe68b6228 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:54 functional-889240 crio[5881]: time="2025-10-03T18:26:54.927575488Z" level=info msg="Creating container: kube-system/etcd-functional-889240/etcd" id=e98dcaeb-b052-4ab7-b331-1aeb11684dd9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:54 functional-889240 crio[5881]: time="2025-10-03T18:26:54.927804404Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:54 functional-889240 crio[5881]: time="2025-10-03T18:26:54.931273646Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:54 functional-889240 crio[5881]: time="2025-10-03T18:26:54.93187792Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:54 functional-889240 crio[5881]: time="2025-10-03T18:26:54.946455148Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=e98dcaeb-b052-4ab7-b331-1aeb11684dd9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:54 functional-889240 crio[5881]: time="2025-10-03T18:26:54.948020142Z" level=info msg="createCtr: deleting container ID 955faf75ca82eb4c674896136be8cd5b931155fc6813ac1099ea842155555279 from idIndex" id=e98dcaeb-b052-4ab7-b331-1aeb11684dd9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:54 functional-889240 crio[5881]: time="2025-10-03T18:26:54.948066902Z" level=info msg="createCtr: removing container 955faf75ca82eb4c674896136be8cd5b931155fc6813ac1099ea842155555279" id=e98dcaeb-b052-4ab7-b331-1aeb11684dd9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:54 functional-889240 crio[5881]: time="2025-10-03T18:26:54.948107229Z" level=info msg="createCtr: deleting container 955faf75ca82eb4c674896136be8cd5b931155fc6813ac1099ea842155555279 from storage" id=e98dcaeb-b052-4ab7-b331-1aeb11684dd9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:54 functional-889240 crio[5881]: time="2025-10-03T18:26:54.950631883Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-889240_kube-system_a73daf0147d5280c6db538ca59db9fe0_0" id=e98dcaeb-b052-4ab7-b331-1aeb11684dd9 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:26:57.610751   18136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:57.611341   18136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:57.613321   18136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:57.613835   18136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:57.615451   18136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:26:57 up  1:09,  0 user,  load average: 1.17, 0.30, 0.12
	Linux functional-889240 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:26:47 functional-889240 kubelet[15004]: E1003 18:26:47.968041   15004 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-889240" podUID="7dadd1df42d6a2c3d1907f134f7d5ea7"
	Oct 03 18:26:49 functional-889240 kubelet[15004]: E1003 18:26:49.940484   15004 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-889240\" not found"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.548387   15004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-889240?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: I1003 18:26:50.701447   15004 kubelet_node_status.go:75] "Attempting to register node" node="functional-889240"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.702007   15004 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-889240"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.924684   15004 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.958040   15004 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:26:50 functional-889240 kubelet[15004]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:50 functional-889240 kubelet[15004]:  > podSandboxID="d2a1f7a262459adddcbc8998558ca80ae50f332cedd95d5813e79fa17642c365"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.958159   15004 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:26:50 functional-889240 kubelet[15004]:         container kube-apiserver start failed in pod kube-apiserver-functional-889240_kube-system(9d9b7aefd7427246dd018814b6979298): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:50 functional-889240 kubelet[15004]:  > logger="UnhandledError"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.958199   15004 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-889240" podUID="9d9b7aefd7427246dd018814b6979298"
	Oct 03 18:26:54 functional-889240 kubelet[15004]: E1003 18:26:54.925106   15004 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:26:54 functional-889240 kubelet[15004]: E1003 18:26:54.950945   15004 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:26:54 functional-889240 kubelet[15004]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:54 functional-889240 kubelet[15004]:  > podSandboxID="816bf4aaa4990184bdc95c0d86d21e6c5c4acf1f357b2bf3229d2f1f717980c8"
	Oct 03 18:26:54 functional-889240 kubelet[15004]: E1003 18:26:54.951078   15004 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:26:54 functional-889240 kubelet[15004]:         container etcd start failed in pod etcd-functional-889240_kube-system(a73daf0147d5280c6db538ca59db9fe0): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:54 functional-889240 kubelet[15004]:  > logger="UnhandledError"
	Oct 03 18:26:54 functional-889240 kubelet[15004]: E1003 18:26:54.951122   15004 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-889240" podUID="a73daf0147d5280c6db538ca59db9fe0"
	Oct 03 18:26:55 functional-889240 kubelet[15004]: E1003 18:26:55.622917   15004 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 03 18:26:56 functional-889240 kubelet[15004]: E1003 18:26:56.344945   15004 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 03 18:26:57 functional-889240 kubelet[15004]: E1003 18:26:57.312708   15004 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-889240.186b0e42e698a181  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-889240,UID:functional-889240,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-889240 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-889240,},FirstTimestamp:2025-10-03 18:22:39.917703553 +0000 UTC m=+1.131431312,LastTimestamp:2025-10-03 18:22:39.917703553 +0000 UTC m=+1.131431312,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-889240,}"
	Oct 03 18:26:57 functional-889240 kubelet[15004]: E1003 18:26:57.549874   15004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-889240?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	

                                                
                                                
-- /stdout --
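The common thread in the dump above is that every CreateContainer attempt, in both the CRI-O and kubelet sections, fails with "cannot open sd-bus: No such file or directory". That is what an OCI runtime typically reports when it is configured for the systemd cgroup driver (note CgroupDriver:systemd in the docker info) but cannot reach a systemd D-Bus socket. A minimal probe for that precondition, as a hypothetical stand-alone diagnostic that is not part of this test suite; the socket path is the standard systemd location and is an assumption:

// sdbus_probe.go: checks whether the systemd D-Bus socket whose absence
// produces the "cannot open sd-bus" errors above is present and connectable.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/run/dbus/system_bus_socket" // assumed standard systemd path
	if _, err := os.Stat(sock); err != nil {
		fmt.Printf("sd-bus socket missing: %v\n", err) // same failure mode as the runtime
		os.Exit(1)
	}
	conn, err := net.DialTimeout("unix", sock, time.Second)
	if err != nil {
		fmt.Printf("sd-bus socket present but not connectable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("systemd D-Bus reachable; the systemd cgroup driver should be usable")
}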
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240: exit status 2 (302.382316ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-889240" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (1.49s)
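The post-mortem lines above show the harness probing apiserver state with `out/minikube-linux-amd64 status --format={{.APIServer}}` and treating exit status 2 ("Stopped") as non-fatal before skipping the kubectl steps. A minimal sketch of that status-check pattern; this illustrates the shape of the probe, not the harness's actual code:

// statuscheck.go: an illustrative status probe. The binary path and profile
// name are taken from this report; the control flow is an assumption.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.APIServer}}", "-p", "functional-889240", "-n", "functional-889240")
	out, err := cmd.Output()
	state := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 2 means a component is stopped; log it and move on,
		// as the "status error: exit status 2 (may be ok)" line above does.
		fmt.Printf("status error: exit status %d (may be ok); apiserver=%q\n",
			exitErr.ExitCode(), state)
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver=%q\n", state)
}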

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (241.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
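The four-minute wait announced above is a label-selector poll against kube-system, interleaved with a separate retry loop whose intervals grow (roughly 5.6s, 12.7s, 13.0s, 20.5s, 35.8s in the retry.go lines below). A minimal client-go sketch of that poll-with-backoff shape; the namespace, selector, and 4m0s budget come from this output, while the kubeconfig path and the doubling schedule are illustrative assumptions, not the test's actual helper:

// pollpods.go: a sketch of polling for pods by label with growing backoff.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute) // "waiting 4m0s" above
	backoff := 2 * time.Second
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "integration-test=storage-provisioner"})
		if err != nil {
			// Against an unreachable apiserver this prints the same
			// "connection refused" warnings that fill the log below.
			fmt.Printf("WARNING: pod list returned: %v; retrying in %s\n", err, backoff)
			time.Sleep(backoff)
			backoff *= 2 // grow the interval, loosely like retry.go's schedule
			continue
		}
		if len(pods.Items) > 0 {
			fmt.Printf("found %d matching pod(s)\n", len(pods.Items))
			return
		}
		time.Sleep(backoff)
	}
	fmt.Println("timed out waiting for pods")
}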
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1003 18:27:00.239669   12212 retry.go:31] will retry after 5.565965199s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1003 18:27:05.806527   12212 retry.go:31] will retry after 12.663015238s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1003 18:27:18.470085   12212 retry.go:31] will retry after 12.971266695s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1003 18:27:31.441955   12212 retry.go:31] will retry after 20.488154039s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1003 18:27:51.930346   12212 retry.go:31] will retry after 35.834980632s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[... the preceding WARNING line repeated verbatim 141 more times while the helper retried the pod list against the unreachable apiserver ...]
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240: exit status 2 (293.748113ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-889240" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
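Every poll above failed the same way: connection refused from the apiserver at 192.168.49.2:8441 until the 4m0s budget ran out. For reference, a minimal client-go sketch of the query the helper keeps retrying; the kubeconfig location and the 2s retry interval are assumptions, while the namespace, label selector, and 4m0s timeout come from the log:

	// pvc_poll_sketch.go -- minimal sketch of the pod-list poll seen above.
	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: the current kubeconfig context points at the cluster
		// under test (here, functional-889240).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Mirror the test's 4m0s budget; while the apiserver is down each
		// List fails fast with "connection refused".
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
				LabelSelector: "integration-test=storage-provisioner",
			})
			if err == nil {
				for _, p := range pods.Items {
					fmt.Println(p.Name, p.Status.Phase)
				}
				return
			}
			fmt.Println("WARNING: pod list returned:", err)
			select {
			case <-ctx.Done():
				fmt.Println("context deadline exceeded")
				return
			case <-time.After(2 * time.Second): // assumed poll interval
			}
		}
	}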
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-889240
helpers_test.go:243: (dbg) docker inspect functional-889240:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	        "Created": "2025-10-03T17:59:56.619817507Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 26766,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T17:59:56.652603806Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hosts",
	        "LogPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a-json.log",
	        "Name": "/functional-889240",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-889240:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-889240",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	                "LowerDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-889240",
	                "Source": "/var/lib/docker/volumes/functional-889240/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-889240",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-889240",
	                "name.minikube.sigs.k8s.io": "functional-889240",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da15d31dc23bdd4694ae9e3b61015d7ce0d61668c73d3e386422834c6f0321d8",
	            "SandboxKey": "/var/run/docker/netns/da15d31dc23b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-889240": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:9e:1d:e9:d9:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "03281bed183d0817c0bc237b5c25093fc10222138aedde4c7deef5823759fa24",
	                    "EndpointID": "28fa584fdd6e253816ae08a2460ef02b91085c8a7996d55008876e3bd65bbc7e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-889240",
	                        "9f4f0f10b4a9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
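Worth noting in the inspect output: the container State is Running and 8441/tcp is published on 127.0.0.1:32781, yet the apiserver was reported Stopped and every request above was refused. A plain TCP probe of the published host port (port number taken from the mapping above; a standalone diagnostic sketch, not part of the harness) separates a stopped container from a stopped apiserver inside a running container:

	// port_probe_sketch.go -- dial the host side of the 8441/tcp mapping.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 127.0.0.1:32781 is the published host port for 8441/tcp in the
		// docker inspect output above.
		conn, err := net.DialTimeout("tcp", "127.0.0.1:32781", 2*time.Second)
		if err != nil {
			// Consistent with the warnings above: nothing is answering
			// behind the published port even though the container runs.
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("port open:", conn.RemoteAddr())
	}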
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240: exit status 2 (292.258491ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 logs -n 25
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start          │ -p functional-889240 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh            │ functional-889240 ssh findmnt -T /mount-9p | grep 9p                                                               │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ dashboard      │ --url --port 36195 -p functional-889240 --alsologtostderr -v=1                                                     │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh            │ functional-889240 ssh -- ls -la /mount-9p                                                                          │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh            │ functional-889240 ssh sudo umount -f /mount-9p                                                                     │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ mount          │ -p functional-889240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1298239377/001:/mount1 --alsologtostderr -v=1 │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ mount          │ -p functional-889240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1298239377/001:/mount2 --alsologtostderr -v=1 │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh            │ functional-889240 ssh findmnt -T /mount1                                                                           │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ mount          │ -p functional-889240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1298239377/001:/mount3 --alsologtostderr -v=1 │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh            │ functional-889240 ssh findmnt -T /mount1                                                                           │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh            │ functional-889240 ssh findmnt -T /mount2                                                                           │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh            │ functional-889240 ssh findmnt -T /mount3                                                                           │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ mount          │ -p functional-889240 --kill=true                                                                                   │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ addons         │ functional-889240 addons list                                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ addons         │ functional-889240 addons list -o json                                                                              │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ update-context │ functional-889240 update-context --alsologtostderr -v=2                                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ update-context │ functional-889240 update-context --alsologtostderr -v=2                                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ update-context │ functional-889240 update-context --alsologtostderr -v=2                                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image          │ functional-889240 image ls --format short --alsologtostderr                                                        │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image          │ functional-889240 image ls --format yaml --alsologtostderr                                                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh            │ functional-889240 ssh pgrep buildkitd                                                                              │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ image          │ functional-889240 image ls --format json --alsologtostderr                                                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image          │ functional-889240 image ls --format table --alsologtostderr                                                        │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image          │ functional-889240 image build -t localhost/my-image:functional-889240 testdata/build --alsologtostderr             │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:27 UTC │
	│ image          │ functional-889240 image ls                                                                                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:27 UTC │ 03 Oct 25 18:27 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:26:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:26:53.356472   58930 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:26:53.356745   58930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:26:53.356756   58930 out.go:374] Setting ErrFile to fd 2...
	I1003 18:26:53.356762   58930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:26:53.357062   58930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:26:53.357508   58930 out.go:368] Setting JSON to false
	I1003 18:26:53.358398   58930 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4164,"bootTime":1759511849,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:26:53.358491   58930 start.go:140] virtualization: kvm guest
	I1003 18:26:53.360378   58930 out.go:179] * [functional-889240] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:26:53.361688   58930 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:26:53.361693   58930 notify.go:220] Checking for updates...
	I1003 18:26:53.363055   58930 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:26:53.364385   58930 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:26:53.365536   58930 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:26:53.366672   58930 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:26:53.367760   58930 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:26:53.369355   58930 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:26:53.369795   58930 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:26:53.393358   58930 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:26:53.393501   58930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:26:53.449005   58930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:26:53.436272745 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:26:53.449135   58930 docker.go:318] overlay module found
	I1003 18:26:53.451084   58930 out.go:179] * Using the docker driver based on existing profile
	I1003 18:26:53.452223   58930 start.go:304] selected driver: docker
	I1003 18:26:53.452240   58930 start.go:924] validating driver "docker" against &{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:26:53.452344   58930 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:26:53.454148   58930 out.go:203] 
	W1003 18:26:53.455299   58930 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1003 18:26:53.456336   58930 out.go:203] 
	
	
	==> CRI-O <==
	Oct 03 18:30:44 functional-889240 crio[5881]: time="2025-10-03T18:30:44.952254158Z" level=info msg="createCtr: removing container dc3144e2887d1e1aab0f0ffe53e4f917b6e9679cd5cab9124bc34f6f24f651dc" id=633de861-53b8-4738-93ad-9b36e7cd3efd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:30:44 functional-889240 crio[5881]: time="2025-10-03T18:30:44.952291024Z" level=info msg="createCtr: deleting container dc3144e2887d1e1aab0f0ffe53e4f917b6e9679cd5cab9124bc34f6f24f651dc from storage" id=633de861-53b8-4738-93ad-9b36e7cd3efd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:30:44 functional-889240 crio[5881]: time="2025-10-03T18:30:44.954388099Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-889240_kube-system_7dadd1df42d6a2c3d1907f134f7d5ea7_0" id=633de861-53b8-4738-93ad-9b36e7cd3efd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:30:45 functional-889240 crio[5881]: time="2025-10-03T18:30:45.925276413Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=b3133518-a961-4e48-980d-4089ae7c4cda name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:30:45 functional-889240 crio[5881]: time="2025-10-03T18:30:45.926176957Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=8f06a222-6935-469e-9794-edb23a64f441 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:30:45 functional-889240 crio[5881]: time="2025-10-03T18:30:45.927037311Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-889240/kube-controller-manager" id=f2873fee-c9f1-4b9b-905f-5623be8ee9b9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:30:45 functional-889240 crio[5881]: time="2025-10-03T18:30:45.927247571Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:30:45 functional-889240 crio[5881]: time="2025-10-03T18:30:45.9303988Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:30:45 functional-889240 crio[5881]: time="2025-10-03T18:30:45.930836661Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:30:45 functional-889240 crio[5881]: time="2025-10-03T18:30:45.946664784Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=f2873fee-c9f1-4b9b-905f-5623be8ee9b9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:30:45 functional-889240 crio[5881]: time="2025-10-03T18:30:45.947934444Z" level=info msg="createCtr: deleting container ID f9d3dd8a4528cd6f2f41e71028785e0f7f03a136b5a3f00d8edd3e0f8d0a853d from idIndex" id=f2873fee-c9f1-4b9b-905f-5623be8ee9b9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:30:45 functional-889240 crio[5881]: time="2025-10-03T18:30:45.947968169Z" level=info msg="createCtr: removing container f9d3dd8a4528cd6f2f41e71028785e0f7f03a136b5a3f00d8edd3e0f8d0a853d" id=f2873fee-c9f1-4b9b-905f-5623be8ee9b9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:30:45 functional-889240 crio[5881]: time="2025-10-03T18:30:45.948014541Z" level=info msg="createCtr: deleting container f9d3dd8a4528cd6f2f41e71028785e0f7f03a136b5a3f00d8edd3e0f8d0a853d from storage" id=f2873fee-c9f1-4b9b-905f-5623be8ee9b9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:30:45 functional-889240 crio[5881]: time="2025-10-03T18:30:45.950005144Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-889240_kube-system_7e715cb6024854d45a9fa99576167e43_0" id=f2873fee-c9f1-4b9b-905f-5623be8ee9b9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:30:49 functional-889240 crio[5881]: time="2025-10-03T18:30:49.925196647Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=c1bc4017-78c4-464d-8537-77fbcac56af1 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:30:49 functional-889240 crio[5881]: time="2025-10-03T18:30:49.926035224Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=d8c6c8c8-80d8-4395-97c0-31f333b08341 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:30:49 functional-889240 crio[5881]: time="2025-10-03T18:30:49.92699328Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-889240/kube-apiserver" id=b5110cd1-1e72-4f30-be22-cc1de9191248 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:30:49 functional-889240 crio[5881]: time="2025-10-03T18:30:49.927206066Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:30:49 functional-889240 crio[5881]: time="2025-10-03T18:30:49.931365124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:30:49 functional-889240 crio[5881]: time="2025-10-03T18:30:49.931789424Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:30:49 functional-889240 crio[5881]: time="2025-10-03T18:30:49.946711355Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=b5110cd1-1e72-4f30-be22-cc1de9191248 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:30:49 functional-889240 crio[5881]: time="2025-10-03T18:30:49.948046755Z" level=info msg="createCtr: deleting container ID 7e9f754d6c49191f91adaa5492c0dcf72c697230c114f6149be23508d7c6073e from idIndex" id=b5110cd1-1e72-4f30-be22-cc1de9191248 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:30:49 functional-889240 crio[5881]: time="2025-10-03T18:30:49.948076942Z" level=info msg="createCtr: removing container 7e9f754d6c49191f91adaa5492c0dcf72c697230c114f6149be23508d7c6073e" id=b5110cd1-1e72-4f30-be22-cc1de9191248 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:30:49 functional-889240 crio[5881]: time="2025-10-03T18:30:49.948104555Z" level=info msg="createCtr: deleting container 7e9f754d6c49191f91adaa5492c0dcf72c697230c114f6149be23508d7c6073e from storage" id=b5110cd1-1e72-4f30-be22-cc1de9191248 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:30:49 functional-889240 crio[5881]: time="2025-10-03T18:30:49.950139374Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-889240_kube-system_9d9b7aefd7427246dd018814b6979298_0" id=b5110cd1-1e72-4f30-be22-cc1de9191248 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:30:52.965008   19169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:30:52.965513   19169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:30:52.967077   19169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:30:52.967484   19169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:30:52.968938   19169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:30:52 up  1:13,  0 user,  load average: 0.02, 0.14, 0.08
	Linux functional-889240 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:30:44 functional-889240 kubelet[15004]:  > podSandboxID="cc37714218db619cb7a417ce510ab6d24921b06cab2510376343b7b5c57bba9a"
	Oct 03 18:30:44 functional-889240 kubelet[15004]: E1003 18:30:44.954796   15004 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:30:44 functional-889240 kubelet[15004]:         container kube-scheduler start failed in pod kube-scheduler-functional-889240_kube-system(7dadd1df42d6a2c3d1907f134f7d5ea7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:30:44 functional-889240 kubelet[15004]:  > logger="UnhandledError"
	Oct 03 18:30:44 functional-889240 kubelet[15004]: E1003 18:30:44.954833   15004 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-889240" podUID="7dadd1df42d6a2c3d1907f134f7d5ea7"
	Oct 03 18:30:45 functional-889240 kubelet[15004]: E1003 18:30:45.924836   15004 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:30:45 functional-889240 kubelet[15004]: E1003 18:30:45.950264   15004 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:30:45 functional-889240 kubelet[15004]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:30:45 functional-889240 kubelet[15004]:  > podSandboxID="5afe648376bae0c19842f5a1c1151818b48e5023850d109e3400d8f2b4d7b310"
	Oct 03 18:30:45 functional-889240 kubelet[15004]: E1003 18:30:45.950349   15004 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:30:45 functional-889240 kubelet[15004]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-889240_kube-system(7e715cb6024854d45a9fa99576167e43): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:30:45 functional-889240 kubelet[15004]:  > logger="UnhandledError"
	Oct 03 18:30:45 functional-889240 kubelet[15004]: E1003 18:30:45.950375   15004 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-889240" podUID="7e715cb6024854d45a9fa99576167e43"
	Oct 03 18:30:48 functional-889240 kubelet[15004]: E1003 18:30:48.584175   15004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-889240?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 03 18:30:48 functional-889240 kubelet[15004]: I1003 18:30:48.766362   15004 kubelet_node_status.go:75] "Attempting to register node" node="functional-889240"
	Oct 03 18:30:48 functional-889240 kubelet[15004]: E1003 18:30:48.766754   15004 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-889240"
	Oct 03 18:30:49 functional-889240 kubelet[15004]: E1003 18:30:49.924768   15004 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:30:49 functional-889240 kubelet[15004]: E1003 18:30:49.950396   15004 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:30:49 functional-889240 kubelet[15004]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:30:49 functional-889240 kubelet[15004]:  > podSandboxID="d2a1f7a262459adddcbc8998558ca80ae50f332cedd95d5813e79fa17642c365"
	Oct 03 18:30:49 functional-889240 kubelet[15004]: E1003 18:30:49.950481   15004 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:30:49 functional-889240 kubelet[15004]:         container kube-apiserver start failed in pod kube-apiserver-functional-889240_kube-system(9d9b7aefd7427246dd018814b6979298): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:30:49 functional-889240 kubelet[15004]:  > logger="UnhandledError"
	Oct 03 18:30:49 functional-889240 kubelet[15004]: E1003 18:30:49.950509   15004 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-889240" podUID="9d9b7aefd7427246dd018814b6979298"
	Oct 03 18:30:49 functional-889240 kubelet[15004]: E1003 18:30:49.958610   15004 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-889240\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240: exit status 2 (290.051848ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-889240" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (241.49s)
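
The failure mode in the logs above is consistent across components: CRI-O reports "container create failed: cannot open sd-bus: No such file or directory" for kube-apiserver, kube-controller-manager, and kube-scheduler, so no control-plane container ever starts and every kubectl call is refused. The docker info dump shows CgroupDriver:systemd; with the systemd cgroup manager, the runtime places containers into systemd scopes over the sd-bus API, so a missing or dead systemd bus inside the kicbase node would produce this error. A minimal triage sketch against the node container (the socket path and the CRI-O config location are assumptions, not verified against this run):

	docker exec functional-889240 ls -l /run/systemd/private          # sd-bus socket the systemd cgroup manager talks to
	docker exec functional-889240 systemctl is-system-running         # a hard failure here points at systemd itself
	docker exec functional-889240 grep -r cgroup_manager /etc/crio/   # "systemd" requires sd-bus; "cgroupfs" does not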

                                                
                                    
x
+
TestFunctional/parallel/MySQL (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-889240 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) Non-zero exit: kubectl --context functional-889240 replace --force -f testdata/mysql.yaml: exit status 1 (45.750804ms)

                                                
                                                
** stderr ** 
	E1003 18:26:55.207360   60253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:55.207955   60253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	unable to recognize "testdata/mysql.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused
	unable to recognize "testdata/mysql.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1800: failed to kubectl replace mysql: args "kubectl --context functional-889240 replace --force -f testdata/mysql.yaml" failed: exit status 1
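
Both kubectl attempts fail at the TCP level ("dial tcp 192.168.49.2:8441: connect: connection refused"), i.e. nothing is listening on the apiserver port, which matches the CreateContainer failures above rather than a TLS or auth problem. The docker inspect output below shows 8441/tcp published to 127.0.0.1:32781, so the apiserver can be probed from the host either way; a minimal probe sketch (both commands are expected to fail until the apiserver container starts):

	curl -sk https://192.168.49.2:8441/healthz    # node IP on the minikube docker network
	curl -sk https://127.0.0.1:32781/healthz      # host-side mapping of 8441/tcp from the inspect output below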
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-889240
helpers_test.go:243: (dbg) docker inspect functional-889240:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	        "Created": "2025-10-03T17:59:56.619817507Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 26766,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T17:59:56.652603806Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hosts",
	        "LogPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a-json.log",
	        "Name": "/functional-889240",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-889240:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-889240",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	                "LowerDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-889240",
	                "Source": "/var/lib/docker/volumes/functional-889240/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-889240",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-889240",
	                "name.minikube.sigs.k8s.io": "functional-889240",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da15d31dc23bdd4694ae9e3b61015d7ce0d61668c73d3e386422834c6f0321d8",
	            "SandboxKey": "/var/run/docker/netns/da15d31dc23b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-889240": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:9e:1d:e9:d9:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "03281bed183d0817c0bc237b5c25093fc10222138aedde4c7deef5823759fa24",
	                    "EndpointID": "28fa584fdd6e253816ae08a2460ef02b91085c8a7996d55008876e3bd65bbc7e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-889240",
	                        "9f4f0f10b4a9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240: exit status 2 (301.439345ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 logs -n 25
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image     │ functional-889240 image ls                                                                                                        │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh       │ functional-889240 ssh cat /mount-9p/test-1759516010263030140                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image     │ functional-889240 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image     │ functional-889240 image save --daemon kicbase/echo-server:functional-889240 --alsologtostderr                                     │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh       │ functional-889240 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                  │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh       │ functional-889240 ssh echo hello                                                                                                  │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh       │ functional-889240 ssh sudo umount -f /mount-9p                                                                                    │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh       │ functional-889240 ssh cat /etc/hostname                                                                                           │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ tunnel    │ functional-889240 tunnel --alsologtostderr                                                                                        │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ tunnel    │ functional-889240 tunnel --alsologtostderr                                                                                        │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ mount     │ -p functional-889240 /tmp/TestFunctionalparallelMountCmdspecific-port3898317380/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh       │ functional-889240 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ start     │ -p functional-889240 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ tunnel    │ functional-889240 tunnel --alsologtostderr                                                                                        │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ start     │ -p functional-889240 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                   │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ start     │ -p functional-889240 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh       │ functional-889240 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ dashboard │ --url --port 36195 -p functional-889240 --alsologtostderr -v=1                                                                    │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh       │ functional-889240 ssh -- ls -la /mount-9p                                                                                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh       │ functional-889240 ssh sudo umount -f /mount-9p                                                                                    │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ mount     │ -p functional-889240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1298239377/001:/mount1 --alsologtostderr -v=1                │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ mount     │ -p functional-889240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1298239377/001:/mount2 --alsologtostderr -v=1                │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh       │ functional-889240 ssh findmnt -T /mount1                                                                                          │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ mount     │ -p functional-889240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1298239377/001:/mount3 --alsologtostderr -v=1                │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh       │ functional-889240 ssh findmnt -T /mount1                                                                                          │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:26:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:26:53.356472   58930 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:26:53.356745   58930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:26:53.356756   58930 out.go:374] Setting ErrFile to fd 2...
	I1003 18:26:53.356762   58930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:26:53.357062   58930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:26:53.357508   58930 out.go:368] Setting JSON to false
	I1003 18:26:53.358398   58930 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4164,"bootTime":1759511849,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:26:53.358491   58930 start.go:140] virtualization: kvm guest
	I1003 18:26:53.360378   58930 out.go:179] * [functional-889240] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:26:53.361688   58930 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:26:53.361693   58930 notify.go:220] Checking for updates...
	I1003 18:26:53.363055   58930 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:26:53.364385   58930 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:26:53.365536   58930 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:26:53.366672   58930 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:26:53.367760   58930 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:26:53.369355   58930 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:26:53.369795   58930 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:26:53.393358   58930 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:26:53.393501   58930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:26:53.449005   58930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:26:53.436272745 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:26:53.449135   58930 docker.go:318] overlay module found
	I1003 18:26:53.451084   58930 out.go:179] * Using the docker driver based on existing profile
	I1003 18:26:53.452223   58930 start.go:304] selected driver: docker
	I1003 18:26:53.452240   58930 start.go:924] validating driver "docker" against &{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:26:53.452344   58930 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:26:53.454148   58930 out.go:203] 
	W1003 18:26:53.455299   58930 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1003 18:26:53.456336   58930 out.go:203] 
	
	
	==> CRI-O <==
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.935572875Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.937507124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.951683101Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a287f5c1-738c-44f4-93cf-fa8b273170d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.953670541Z" level=info msg="createCtr: deleting container ID 94a45024cee963f25522950daa008598cc2b6a92c31321cf665c9a52bed71c52 from idIndex" id=a287f5c1-738c-44f4-93cf-fa8b273170d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.953726976Z" level=info msg="createCtr: removing container 94a45024cee963f25522950daa008598cc2b6a92c31321cf665c9a52bed71c52" id=a287f5c1-738c-44f4-93cf-fa8b273170d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.953775078Z" level=info msg="createCtr: deleting container 94a45024cee963f25522950daa008598cc2b6a92c31321cf665c9a52bed71c52 from storage" id=a287f5c1-738c-44f4-93cf-fa8b273170d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:50 functional-889240 crio[5881]: time="2025-10-03T18:26:50.957548935Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-889240_kube-system_9d9b7aefd7427246dd018814b6979298_0" id=a287f5c1-738c-44f4-93cf-fa8b273170d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.292108767Z" level=info msg="Checking image status: kicbase/echo-server:functional-889240" id=abbb2808-ed68-484b-b163-379c059f6d17 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.319279189Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-889240" id=d452395f-c84e-40da-918e-c48346047241 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.319865155Z" level=info msg="Image docker.io/kicbase/echo-server:functional-889240 not found" id=d452395f-c84e-40da-918e-c48346047241 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.319920152Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-889240 found" id=d452395f-c84e-40da-918e-c48346047241 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.352587677Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-889240" id=5d9b33fc-7d75-497a-8748-fc1b3d440fcc name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.352740621Z" level=info msg="Image localhost/kicbase/echo-server:functional-889240 not found" id=5d9b33fc-7d75-497a-8748-fc1b3d440fcc name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:51 functional-889240 crio[5881]: time="2025-10-03T18:26:51.352785301Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-889240 found" id=5d9b33fc-7d75-497a-8748-fc1b3d440fcc name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:54 functional-889240 crio[5881]: time="2025-10-03T18:26:54.92556282Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=1fe2569b-e9ac-4cb7-ab64-4cb8d0d9320b name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:54 functional-889240 crio[5881]: time="2025-10-03T18:26:54.926653774Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=f96ae2f5-7d10-4b2e-8d8f-a44fe68b6228 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:54 functional-889240 crio[5881]: time="2025-10-03T18:26:54.927575488Z" level=info msg="Creating container: kube-system/etcd-functional-889240/etcd" id=e98dcaeb-b052-4ab7-b331-1aeb11684dd9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:54 functional-889240 crio[5881]: time="2025-10-03T18:26:54.927804404Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:54 functional-889240 crio[5881]: time="2025-10-03T18:26:54.931273646Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:54 functional-889240 crio[5881]: time="2025-10-03T18:26:54.93187792Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:54 functional-889240 crio[5881]: time="2025-10-03T18:26:54.946455148Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=e98dcaeb-b052-4ab7-b331-1aeb11684dd9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:54 functional-889240 crio[5881]: time="2025-10-03T18:26:54.948020142Z" level=info msg="createCtr: deleting container ID 955faf75ca82eb4c674896136be8cd5b931155fc6813ac1099ea842155555279 from idIndex" id=e98dcaeb-b052-4ab7-b331-1aeb11684dd9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:54 functional-889240 crio[5881]: time="2025-10-03T18:26:54.948066902Z" level=info msg="createCtr: removing container 955faf75ca82eb4c674896136be8cd5b931155fc6813ac1099ea842155555279" id=e98dcaeb-b052-4ab7-b331-1aeb11684dd9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:54 functional-889240 crio[5881]: time="2025-10-03T18:26:54.948107229Z" level=info msg="createCtr: deleting container 955faf75ca82eb4c674896136be8cd5b931155fc6813ac1099ea842155555279 from storage" id=e98dcaeb-b052-4ab7-b331-1aeb11684dd9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:54 functional-889240 crio[5881]: time="2025-10-03T18:26:54.950631883Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-889240_kube-system_a73daf0147d5280c6db538ca59db9fe0_0" id=e98dcaeb-b052-4ab7-b331-1aeb11684dd9 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:26:56.115080   17873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:56.115651   17873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:56.117303   17873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:56.117738   17873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:56.118919   17873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:26:56 up  1:09,  0 user,  load average: 1.17, 0.30, 0.12
	Linux functional-889240 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:26:47 functional-889240 kubelet[15004]: E1003 18:26:47.967997   15004 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:26:47 functional-889240 kubelet[15004]:         container kube-scheduler start failed in pod kube-scheduler-functional-889240_kube-system(7dadd1df42d6a2c3d1907f134f7d5ea7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:47 functional-889240 kubelet[15004]:  > logger="UnhandledError"
	Oct 03 18:26:47 functional-889240 kubelet[15004]: E1003 18:26:47.968041   15004 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-889240" podUID="7dadd1df42d6a2c3d1907f134f7d5ea7"
	Oct 03 18:26:49 functional-889240 kubelet[15004]: E1003 18:26:49.940484   15004 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-889240\" not found"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.548387   15004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-889240?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: I1003 18:26:50.701447   15004 kubelet_node_status.go:75] "Attempting to register node" node="functional-889240"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.702007   15004 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-889240"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.924684   15004 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.958040   15004 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:26:50 functional-889240 kubelet[15004]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:50 functional-889240 kubelet[15004]:  > podSandboxID="d2a1f7a262459adddcbc8998558ca80ae50f332cedd95d5813e79fa17642c365"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.958159   15004 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:26:50 functional-889240 kubelet[15004]:         container kube-apiserver start failed in pod kube-apiserver-functional-889240_kube-system(9d9b7aefd7427246dd018814b6979298): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:50 functional-889240 kubelet[15004]:  > logger="UnhandledError"
	Oct 03 18:26:50 functional-889240 kubelet[15004]: E1003 18:26:50.958199   15004 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-889240" podUID="9d9b7aefd7427246dd018814b6979298"
	Oct 03 18:26:54 functional-889240 kubelet[15004]: E1003 18:26:54.925106   15004 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:26:54 functional-889240 kubelet[15004]: E1003 18:26:54.950945   15004 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:26:54 functional-889240 kubelet[15004]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:54 functional-889240 kubelet[15004]:  > podSandboxID="816bf4aaa4990184bdc95c0d86d21e6c5c4acf1f357b2bf3229d2f1f717980c8"
	Oct 03 18:26:54 functional-889240 kubelet[15004]: E1003 18:26:54.951078   15004 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:26:54 functional-889240 kubelet[15004]:         container etcd start failed in pod etcd-functional-889240_kube-system(a73daf0147d5280c6db538ca59db9fe0): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:54 functional-889240 kubelet[15004]:  > logger="UnhandledError"
	Oct 03 18:26:54 functional-889240 kubelet[15004]: E1003 18:26:54.951122   15004 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-889240" podUID="a73daf0147d5280c6db538ca59db9fe0"
	Oct 03 18:26:55 functional-889240 kubelet[15004]: E1003 18:26:55.622917   15004 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	

                                                
                                                
-- /stdout --
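The repeated "cannot open sd-bus: No such file or directory" failures in the CRI-O and kubelet logs above come from container creation under the systemd cgroup driver (CgroupDriver:systemd in the docker info), which needs a reachable D-Bus/systemd socket inside the node container. A minimal diagnostic sketch, assuming the node container name functional-889240 from this profile (commands illustrative, not part of the test run):

	# Is dbus running inside the kicbase node container?
	docker exec functional-889240 systemctl status dbus --no-pager
	# The system bus socket that sd-bus typically tries to open
	docker exec functional-889240 ls -l /run/dbus/system_bus_socket

If the socket is missing, every CreateContainer call will fail exactly as shown above, and the static pods (apiserver, etcd, scheduler) never start.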
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240: exit status 2 (302.468334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-889240" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/MySQL (1.32s)
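The start attempt logged at 18:26:53 above was rejected with RSRC_INSUFFICIENT_REQ_MEMORY because it requested only 250 MiB, while the profile itself is configured with Memory:4096; the undersized request looks like a deliberate validation run rather than a real restart. For an actual restart of this profile the usable minimum would need to be met; an illustrative sketch (flag values are assumptions, not taken from this run):

	# Restart the existing profile with an explicit, sufficient memory allocation
	out/minikube-linux-amd64 start -p functional-889240 --memory=4096 --driver=docker --container-runtime=crio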

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (2.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-889240 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-889240 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (57.823142ms)

                                                
                                                
** stderr ** 
	E1003 18:26:47.176354   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.176816   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.177873   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.178243   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.179725   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-889240 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	E1003 18:26:47.176354   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.176816   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.177873   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.178243   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.179725   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	E1003 18:26:47.176354   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.176816   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.177873   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.178243   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.179725   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	E1003 18:26:47.176354   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.176816   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.177873   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.178243   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.179725   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	E1003 18:26:47.176354   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.176816   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.177873   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.178243   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.179725   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	E1003 18:26:47.176354   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.176816   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.177873   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.178243   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:26:47.179725   53454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
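Once the apiserver at 192.168.49.2:8441 is reachable again, the expected minikube.k8s.io/* labels can be verified directly; two illustrative equivalents of the go-template query above:

	# All labels on every node
	kubectl --context functional-889240 get nodes --show-labels
	# Labels of the node as a JSON map
	kubectl --context functional-889240 get node functional-889240 -o jsonpath='{.metadata.labels}'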
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-889240
helpers_test.go:243: (dbg) docker inspect functional-889240:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	        "Created": "2025-10-03T17:59:56.619817507Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 26766,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T17:59:56.652603806Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/hosts",
	        "LogPath": "/var/lib/docker/containers/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a/9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a-json.log",
	        "Name": "/functional-889240",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-889240:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-889240",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9f4f0f10b4a905a6a72a26236b8ac0152e9494c39e1dbaac9573e24575926a0a",
	                "LowerDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/961096bc3e03412c44a9a47f92bdb9cf238c1e0524b374efccb9a50b090cd3f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-889240",
	                "Source": "/var/lib/docker/volumes/functional-889240/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-889240",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-889240",
	                "name.minikube.sigs.k8s.io": "functional-889240",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da15d31dc23bdd4694ae9e3b61015d7ce0d61668c73d3e386422834c6f0321d8",
	            "SandboxKey": "/var/run/docker/netns/da15d31dc23b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-889240": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:9e:1d:e9:d9:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "03281bed183d0817c0bc237b5c25093fc10222138aedde4c7deef5823759fa24",
	                    "EndpointID": "28fa584fdd6e253816ae08a2460ef02b91085c8a7996d55008876e3bd65bbc7e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-889240",
	                        "9f4f0f10b4a9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
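The inspect output shows 8441/tcp published to 127.0.0.1:32781 on the host, so the apiserver can also be probed without kubectl; an illustrative check (expected to be refused while the apiserver is down, matching the connection errors above):

	curl -k https://127.0.0.1:32781/healthz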
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-889240 -n functional-889240: exit status 2 (351.670229ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-889240 logs -n 25: (1.070430536s)
helpers_test.go:260: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-889240 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                   │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                       │ minikube          │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │ 03 Oct 25 18:14 UTC │
	│ kubectl │ functional-889240 kubectl -- --context functional-889240 get pods                                                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │                     │
	│ start   │ -p functional-889240 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                  │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:14 UTC │                     │
	│ config  │ functional-889240 config unset cpus                                                                                       │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ cp      │ functional-889240 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                        │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ license │                                                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ service │ functional-889240 service list                                                                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ config  │ functional-889240 config get cpus                                                                                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ config  │ functional-889240 config set cpus 2                                                                                       │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ config  │ functional-889240 config get cpus                                                                                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh     │ functional-889240 ssh sudo systemctl is-active docker                                                                     │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ config  │ functional-889240 config unset cpus                                                                                       │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh     │ functional-889240 ssh -n functional-889240 sudo cat /home/docker/cp-test.txt                                              │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ config  │ functional-889240 config get cpus                                                                                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ service │ functional-889240 service list -o json                                                                                    │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh     │ functional-889240 ssh sudo systemctl is-active containerd                                                                 │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ cp      │ functional-889240 cp functional-889240:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd328528260/001/cp-test.txt │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ service │ functional-889240 service --namespace=default --https --url hello-node                                                    │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ service │ functional-889240 service hello-node --url --format={{.IP}}                                                               │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh     │ functional-889240 ssh -n functional-889240 sudo cat /home/docker/cp-test.txt                                              │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image load --daemon kicbase/echo-server:functional-889240 --alsologtostderr                             │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ service │ functional-889240 service hello-node --url                                                                                │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ cp      │ functional-889240 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                 │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:14:28
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:14:28.726754   38063 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:14:28.726997   38063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:14:28.727000   38063 out.go:374] Setting ErrFile to fd 2...
	I1003 18:14:28.727003   38063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:14:28.727268   38063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:14:28.727968   38063 out.go:368] Setting JSON to false
	I1003 18:14:28.729004   38063 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3420,"bootTime":1759511849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:14:28.729075   38063 start.go:140] virtualization: kvm guest
	I1003 18:14:28.731008   38063 out.go:179] * [functional-889240] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:14:28.732488   38063 notify.go:220] Checking for updates...
	I1003 18:14:28.732492   38063 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:14:28.733579   38063 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:14:28.734939   38063 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:14:28.736179   38063 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:14:28.737411   38063 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:14:28.738587   38063 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:14:28.740087   38063 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:14:28.740180   38063 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:14:28.764594   38063 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:14:28.764685   38063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:14:28.818292   38063 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-03 18:14:28.807876558 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:14:28.818395   38063 docker.go:318] overlay module found
	I1003 18:14:28.820263   38063 out.go:179] * Using the docker driver based on existing profile
	I1003 18:14:28.821380   38063 start.go:304] selected driver: docker
	I1003 18:14:28.821386   38063 start.go:924] validating driver "docker" against &{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:14:28.821453   38063 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:14:28.821525   38063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:14:28.873759   38063 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-03 18:14:28.863222744 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
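Note: the docker system info --format "{{json .}}" probe above returns the daemon's whole info struct as a single JSON object, of which only a few fields (CPU count, memory, cgroup driver) actually feed into validation. A minimal sketch of that pattern, assuming only the Go standard library and a docker CLI on PATH; the dockerInfo struct here is illustrative, not minikube's real type:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo keeps just the fields of interest; the JSON key names
	// match the struct dump visible in the log line above.
	type dockerInfo struct {
		NCPU         int    `json:"NCPU"`
		MemTotal     int64  `json:"MemTotal"`
		CgroupDriver string `json:"CgroupDriver"`
		OSType       string `json:"OSType"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("cpus=%d mem=%d cgroup=%s os=%s\n", info.NCPU, info.MemTotal, info.CgroupDriver, info.OSType)
	}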
	I1003 18:14:28.874408   38063 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:14:28.874443   38063 cni.go:84] Creating CNI manager for ""
	I1003 18:14:28.874490   38063 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:14:28.874537   38063 start.go:348] cluster config:
	{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
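Note: the kindnet recommendation at cni.go:143 is effectively a lookup on the (driver, runtime) pair: a KIC driver such as docker paired with cri-o needs an explicit CNI, and kindnet is the recommended default. A hedged sketch of that decision; chooseCNI and its rule set are illustrative, not minikube's actual code:

	package main

	import "fmt"

	// chooseCNI is a hypothetical reduction of the selection logged above:
	// container-based (KIC) drivers running cri-o or containerd get kindnet;
	// otherwise the runtime's bundled CNI is left in charge.
	func chooseCNI(driver, runtime string) string {
		kic := driver == "docker" || driver == "podman"
		if kic && (runtime == "crio" || runtime == "containerd") {
			return "kindnet"
		}
		return "" // empty: no explicit CNI manager needed
	}

	func main() {
		fmt.Println(chooseCNI("docker", "crio")) // kindnet
	}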
	I1003 18:14:28.876500   38063 out.go:179] * Starting "functional-889240" primary control-plane node in "functional-889240" cluster
	I1003 18:14:28.877706   38063 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:14:28.878837   38063 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:14:28.879769   38063 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:14:28.879795   38063 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:14:28.879802   38063 cache.go:58] Caching tarball of preloaded images
	I1003 18:14:28.879865   38063 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:14:28.879873   38063 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:14:28.879879   38063 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:14:28.879967   38063 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/config.json ...
	I1003 18:14:28.899017   38063 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:14:28.899026   38063 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:14:28.899040   38063 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:14:28.899069   38063 start.go:360] acquireMachinesLock for functional-889240: {Name:mk6750a9fb1c1c3747b0abf2aebe2a2d0047ae3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:14:28.899117   38063 start.go:364] duration metric: took 35.993µs to acquireMachinesLock for "functional-889240"
	I1003 18:14:28.899130   38063 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:14:28.899133   38063 fix.go:54] fixHost starting: 
	I1003 18:14:28.899327   38063 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:14:28.916111   38063 fix.go:112] recreateIfNeeded on functional-889240: state=Running err=<nil>
	W1003 18:14:28.916134   38063 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 18:14:28.918050   38063 out.go:252] * Updating the running docker "functional-889240" container ...
	I1003 18:14:28.918084   38063 machine.go:93] provisionDockerMachine start ...
	I1003 18:14:28.918165   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:28.934689   38063 main.go:141] libmachine: Using SSH client type: native
	I1003 18:14:28.934913   38063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:14:28.934921   38063 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:14:29.076697   38063 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-889240
	
	I1003 18:14:29.076727   38063 ubuntu.go:182] provisioning hostname "functional-889240"
	I1003 18:14:29.076782   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:29.092887   38063 main.go:141] libmachine: Using SSH client type: native
	I1003 18:14:29.093101   38063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:14:29.093108   38063 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-889240 && echo "functional-889240" | sudo tee /etc/hostname
	I1003 18:14:29.242886   38063 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-889240
	
	I1003 18:14:29.242996   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:29.260006   38063 main.go:141] libmachine: Using SSH client type: native
	I1003 18:14:29.260203   38063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:14:29.260220   38063 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-889240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-889240/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-889240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:14:29.401432   38063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
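Note: the shell snippet above keeps /etc/hosts idempotent: it touches the file only when no entry for the hostname exists, preferring to rewrite an existing 127.0.1.1 line over appending a new one. The same logic as a small Go sketch (pure string transform; the SSH transport and file I/O are omitted):

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostEntry mirrors the guarded sed/tee logic: leave the content
	// alone if any line already maps the hostname, otherwise rewrite an
	// existing 127.0.1.1 line, or append one if none is present.
	func ensureHostEntry(hosts, name string) string {
		lines := strings.Split(hosts, "\n")
		for _, l := range lines {
			f := strings.Fields(l)
			if len(f) >= 2 && f[len(f)-1] == name {
				return hosts // already mapped
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name
				return strings.Join(lines, "\n")
			}
		}
		return hosts + "\n127.0.1.1 " + name
	}

	func main() {
		fmt.Println(ensureHostEntry("127.0.0.1 localhost", "functional-889240"))
	}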
	I1003 18:14:29.401463   38063 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:14:29.401485   38063 ubuntu.go:190] setting up certificates
	I1003 18:14:29.401496   38063 provision.go:84] configureAuth start
	I1003 18:14:29.401542   38063 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-889240
	I1003 18:14:29.417679   38063 provision.go:143] copyHostCerts
	I1003 18:14:29.417732   38063 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:14:29.417754   38063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:14:29.417818   38063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:14:29.417930   38063 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:14:29.417934   38063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:14:29.417959   38063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:14:29.418062   38063 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:14:29.418066   38063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:14:29.418091   38063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:14:29.418151   38063 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.functional-889240 san=[127.0.0.1 192.168.49.2 functional-889240 localhost minikube]
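Note: the server cert generated above must carry every name and address the machine will be reached by, which is exactly the san=[...] list in the log line. A self-contained crypto/x509 sketch of issuing such a cert; it self-signs to stay short, whereas the step above signs with the ca-key.pem named in the log:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.functional-889240"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The SANs from the log line above, split by type.
			DNSNames:    []string{"functional-889240", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		}
		// Self-signed here (template doubles as parent) purely for brevity.
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}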
	I1003 18:14:29.517156   38063 provision.go:177] copyRemoteCerts
	I1003 18:14:29.517211   38063 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:14:29.517244   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:29.534610   38063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:14:29.634576   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:14:29.651152   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1003 18:14:29.667404   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 18:14:29.683300   38063 provision.go:87] duration metric: took 281.795524ms to configureAuth
	I1003 18:14:29.683315   38063 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:14:29.683451   38063 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:14:29.683536   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:29.701238   38063 main.go:141] libmachine: Using SSH client type: native
	I1003 18:14:29.701444   38063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1003 18:14:29.701460   38063 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:14:29.964774   38063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:14:29.964789   38063 machine.go:96] duration metric: took 1.046699275s to provisionDockerMachine
	I1003 18:14:29.964799   38063 start.go:293] postStartSetup for "functional-889240" (driver="docker")
	I1003 18:14:29.964807   38063 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:14:29.964862   38063 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:14:29.964919   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:29.982141   38063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:14:30.082849   38063 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:14:30.086167   38063 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:14:30.086182   38063 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:14:30.086190   38063 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:14:30.086245   38063 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:14:30.086322   38063 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:14:30.086390   38063 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts -> hosts in /etc/test/nested/copy/12212
	I1003 18:14:30.086418   38063 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/12212
	I1003 18:14:30.093540   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:14:30.109775   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts --> /etc/test/nested/copy/12212/hosts (40 bytes)
	I1003 18:14:30.125563   38063 start.go:296] duration metric: took 160.752264ms for postStartSetup
	I1003 18:14:30.125613   38063 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:14:30.125652   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:30.142705   38063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:14:30.239819   38063 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:14:30.244462   38063 fix.go:56] duration metric: took 1.345323072s for fixHost
	I1003 18:14:30.244476   38063 start.go:83] releasing machines lock for "functional-889240", held for 1.345352654s
	I1003 18:14:30.244534   38063 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-889240
	I1003 18:14:30.261148   38063 ssh_runner.go:195] Run: cat /version.json
	I1003 18:14:30.261181   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:30.261277   38063 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:14:30.261317   38063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:14:30.278533   38063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:14:30.278911   38063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:14:30.374843   38063 ssh_runner.go:195] Run: systemctl --version
	I1003 18:14:30.426119   38063 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:14:30.460148   38063 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:14:30.464555   38063 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:14:30.464600   38063 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:14:30.471950   38063 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 18:14:30.471961   38063 start.go:495] detecting cgroup driver to use...
	I1003 18:14:30.472000   38063 detect.go:190] detected "systemd" cgroup driver on host os
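Note: detect.go:190 reads the cgroup driver off the host OS so cri-o can be configured to match. One common heuristic, given here as an assumption rather than minikube's exact rule, is that a cgroup v2 unified hierarchy implies systemd:

	package main

	import (
		"fmt"
		"os"
	)

	// detectCgroupDriver is a hypothetical heuristic: on a cgroup v2 host
	// the unified hierarchy exposes cgroup.controllers and systemd is the
	// usual manager; otherwise fall back to cgroupfs.
	func detectCgroupDriver() string {
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			return "systemd"
		}
		return "cgroupfs"
	}

	func main() { fmt.Println(detectCgroupDriver()) }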
	I1003 18:14:30.472044   38063 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:14:30.485257   38063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:14:30.496477   38063 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:14:30.496516   38063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:14:30.510101   38063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:14:30.521418   38063 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:14:30.603143   38063 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:14:30.686683   38063 docker.go:234] disabling docker service ...
	I1003 18:14:30.686723   38063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:14:30.700010   38063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:14:30.711397   38063 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:14:30.789401   38063 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:14:30.867745   38063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:14:30.879595   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:14:30.892654   38063 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:14:30.892698   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.901033   38063 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:14:30.901080   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.909297   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.917346   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.925200   38063 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:14:30.932963   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.941075   38063 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.948857   38063 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:14:30.956661   38063 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:14:30.963293   38063 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:14:30.969876   38063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:14:31.048833   38063 ssh_runner.go:195] Run: sudo systemctl restart crio
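Note: the run of sed commands above rewrites individual keys in /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, sysctls) and only then reloads systemd and restarts crio. The same whole-line key rewrite as a Go sketch; setTOMLKey mirrors the sed expressions in the log but is not minikube's code:

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// setTOMLKey mirrors `sed -i 's|^.*KEY = .*$|KEY = "VALUE"|'`: every
	// line assigning the key is replaced wholesale with the new value.
	func setTOMLKey(conf, key, value string) string {
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		return re.ReplaceAllString(conf, fmt.Sprintf(`%s = %q`, key, value))
	}

	func main() {
		conf := "cgroup_manager = \"cgroupfs\"\npause_image = \"old\""
		conf = setTOMLKey(conf, "cgroup_manager", "systemd")
		conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
		fmt.Println(strings.TrimSpace(conf))
	}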
	I1003 18:14:31.154686   38063 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:14:31.154732   38063 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:14:31.158463   38063 start.go:563] Will wait 60s for crictl version
	I1003 18:14:31.158505   38063 ssh_runner.go:195] Run: which crictl
	I1003 18:14:31.161802   38063 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:14:31.185028   38063 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:14:31.185099   38063 ssh_runner.go:195] Run: crio --version
	I1003 18:14:31.211351   38063 ssh_runner.go:195] Run: crio --version
	I1003 18:14:31.239599   38063 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:14:31.241121   38063 cli_runner.go:164] Run: docker network inspect functional-889240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:14:31.257340   38063 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:14:31.263166   38063 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1003 18:14:31.264167   38063 kubeadm.go:883] updating cluster {Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:14:31.264267   38063 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:14:31.264310   38063 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:14:31.293848   38063 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:14:31.293858   38063 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:14:31.293907   38063 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:14:31.319316   38063 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:14:31.319326   38063 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:14:31.319331   38063 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1003 18:14:31.319423   38063 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-889240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 18:14:31.319482   38063 ssh_runner.go:195] Run: crio config
	I1003 18:14:31.363053   38063 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1003 18:14:31.363070   38063 cni.go:84] Creating CNI manager for ""
	I1003 18:14:31.363079   38063 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:14:31.363097   38063 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:14:31.363115   38063 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-889240 NodeName:functional-889240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:14:31.363211   38063 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-889240"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 18:14:31.363260   38063 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:14:31.371060   38063 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:14:31.371113   38063 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:14:31.378260   38063 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1003 18:14:31.389622   38063 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:14:31.401169   38063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
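Note: the kubeadm config printed above (kubeadm.go:195) is several YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) rendered from the options struct and shipped to /var/tmp/minikube/kubeadm.yaml.new. A cut-down text/template sketch of that rendering; the template text and the kubeadmParams struct are illustrative stand-ins, not minikube's real template:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeadmParams is a reduced stand-in for the full options struct.
	type kubeadmParams struct {
		NodeIP   string
		BindPort int
		Version  string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.BindPort}}
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	kubernetesVersion: {{.Version}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		// Values copied from the rendered config above.
		_ = t.Execute(os.Stdout, kubeadmParams{NodeIP: "192.168.49.2", BindPort: 8441, Version: "v1.34.1"})
	}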
	I1003 18:14:31.413278   38063 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 18:14:31.416670   38063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:14:31.493997   38063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:14:31.506325   38063 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240 for IP: 192.168.49.2
	I1003 18:14:31.506337   38063 certs.go:195] generating shared ca certs ...
	I1003 18:14:31.506355   38063 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:14:31.506504   38063 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:14:31.506539   38063 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:14:31.506544   38063 certs.go:257] generating profile certs ...
	I1003 18:14:31.506611   38063 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.key
	I1003 18:14:31.506654   38063 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key.eb3f8f7c
	I1003 18:14:31.506684   38063 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key
	I1003 18:14:31.506800   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:14:31.506838   38063 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:14:31.506844   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:14:31.506863   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:14:31.506885   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:14:31.506914   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:14:31.506949   38063 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:14:31.507555   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:14:31.523949   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:14:31.540075   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:14:31.556229   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:14:31.572472   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 18:14:31.588618   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:14:31.604606   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:14:31.620082   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:14:31.636014   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:14:31.652102   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:14:31.668081   38063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:14:31.684503   38063 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:14:31.696104   38063 ssh_runner.go:195] Run: openssl version
	I1003 18:14:31.701806   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:14:31.709474   38063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:14:31.712729   38063 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:14:31.712776   38063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:14:31.746262   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:14:31.754238   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:14:31.762041   38063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:14:31.765354   38063 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:14:31.765385   38063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:14:31.799341   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
	I1003 18:14:31.807532   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:14:31.815668   38063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:14:31.819149   38063 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:14:31.819195   38063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:14:31.853378   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:14:31.861557   38063 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:14:31.865026   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 18:14:31.898216   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 18:14:31.931439   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 18:14:31.964848   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 18:14:31.997996   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 18:14:32.031331   38063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
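Note: each 'openssl x509 -checkend 86400' call above asks whether a control-plane certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. An equivalent check in Go, assuming a single-certificate PEM file (the path below is one of those probed in the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// checkend reports whether the first certificate in the PEM file
	// expires within d, matching `openssl x509 -checkend`.
	func checkend(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", expiring)
	}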
	I1003 18:14:32.064773   38063 kubeadm.go:400] StartCluster: {Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:14:32.064844   38063 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:14:32.064884   38063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:14:32.091563   38063 cri.go:89] found id: ""
	I1003 18:14:32.091628   38063 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:14:32.099575   38063 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 18:14:32.099617   38063 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 18:14:32.099649   38063 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 18:14:32.106476   38063 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:14:32.106922   38063 kubeconfig.go:125] found "functional-889240" server: "https://192.168.49.2:8441"
	I1003 18:14:32.108169   38063 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 18:14:32.115724   38063 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-03 18:00:01.716218369 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-03 18:14:31.411258298 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
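Note: kubeadm.go:644 decides whether to reconfigure by diffing the saved kubeadm.yaml against the freshly rendered .new file, and diff's exit status carries the answer (0 identical, 1 drift, anything higher a real error). A local sketch of that check; the real call runs diff over SSH inside the node:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// configDrifted runs `diff -u old new` and maps the exit status:
	// 0 = identical, 1 = drift detected, anything else = real error.
	func configDrifted(oldPath, newPath string) (bool, error) {
		err := exec.Command("diff", "-u", oldPath, newPath).Run()
		if err == nil {
			return false, nil
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			return true, nil
		}
		return false, err
	}

	func main() {
		drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		fmt.Println(drifted, err)
	}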
	I1003 18:14:32.115731   38063 kubeadm.go:1160] stopping kube-system containers ...
	I1003 18:14:32.115740   38063 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1003 18:14:32.115779   38063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:14:32.142745   38063 cri.go:89] found id: ""
	I1003 18:14:32.142803   38063 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1003 18:14:32.181602   38063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:14:32.189432   38063 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  3 18:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct  3 18:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  3 18:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  3 18:04 /etc/kubernetes/scheduler.conf
	
	I1003 18:14:32.189481   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1003 18:14:32.196894   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1003 18:14:32.203921   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:14:32.203965   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:14:32.210881   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1003 18:14:32.217766   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:14:32.217803   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:14:32.224334   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1003 18:14:32.231030   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:14:32.231065   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:14:32.237472   38063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:14:32.244457   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:14:32.283268   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:14:33.742947   38063 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.459652347s)
	I1003 18:14:33.743017   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:14:33.898116   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:14:33.942573   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:14:33.988522   38063 api_server.go:52] waiting for apiserver process to appear ...
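Note: the long run of pgrep probes below is a fixed-interval wait loop; the timestamps show the apiserver process being polled every 500ms until it appears or a deadline passes (in this failed run it never does within the section shown). A local sketch of just the timing logic; the real loop executes pgrep over SSH, and the timeout value here is an arbitrary example:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls pgrep on a fixed 500ms cadence until the
	// apiserver process shows up or the budget runs out.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil // process found
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver process did not appear within %s", timeout)
	}

	func main() {
		// Example timeout only; not a value taken from the log.
		if err := waitForAPIServer(4 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}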
	I1003 18:14:33.988576   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:34.488790   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:34.989160   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:35.489680   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:35.988868   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:36.488719   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:36.989189   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:37.488931   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:37.988689   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:38.489192   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:38.988747   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:39.488853   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:39.988726   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:40.488885   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:40.988836   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:41.489087   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:41.989102   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:42.489308   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:42.989350   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:43.489437   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:43.989370   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:44.489479   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:44.989473   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:45.489475   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:45.989163   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:46.489071   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:46.989061   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:47.489362   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:47.989160   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:48.489058   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:48.989044   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:49.489308   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:49.989261   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:14:50.489305   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	... (sudo pgrep -xnf kube-apiserver.*minikube.* repeated every ~500 ms from 18:14:50 to 18:15:33; identical probes condensed)
	I1003 18:15:33.489525   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
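The run of pgrep probes above is the apiserver health wait: minikube keeps asking the node whether a kube-apiserver process matching the profile exists, and only falls back to the diagnostic sweep below once the wait gives up. A minimal sketch of that kind of poll loop in Go, assuming a hypothetical runSSH helper standing in for minikube's ssh_runner (here it just shells out locally rather than over SSH):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runSSH stands in for minikube's ssh_runner; in this sketch it runs
// the command locally through bash instead of over SSH to the node.
func runSSH(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

// waitForAPIServer polls pgrep until a matching kube-apiserver process
// appears or the deadline passes, at the ~500 ms cadence seen above.
func waitForAPIServer(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits non-zero when nothing matches, which Run()
		// surfaces as an error.
		if runSSH(`sudo pgrep -xnf 'kube-apiserver.*minikube.*'`) == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(500*time.Millisecond, 45*time.Second); err != nil {
		fmt.Println(err)
	}
}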
	I1003 18:15:33.989163   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:33.989216   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:34.014490   38063 cri.go:89] found id: ""
	I1003 18:15:34.014506   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.014513   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:34.014518   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:34.014556   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:34.039203   38063 cri.go:89] found id: ""
	I1003 18:15:34.039217   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.039223   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:34.039227   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:34.039266   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:34.064423   38063 cri.go:89] found id: ""
	I1003 18:15:34.064440   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.064448   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:34.064452   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:34.064494   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:34.089636   38063 cri.go:89] found id: ""
	I1003 18:15:34.089650   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.089661   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:34.089665   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:34.089707   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:34.114198   38063 cri.go:89] found id: ""
	I1003 18:15:34.114211   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.114217   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:34.114221   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:34.114261   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:34.138167   38063 cri.go:89] found id: ""
	I1003 18:15:34.138180   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.138186   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:34.138190   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:34.138234   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:34.163057   38063 cri.go:89] found id: ""
	I1003 18:15:34.163071   38063 logs.go:282] 0 containers: []
	W1003 18:15:34.163079   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:34.163090   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:34.163102   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:34.230868   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:34.230885   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:34.242117   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:34.242134   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:34.296197   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:34.289745    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.290228    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.291731    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.292260    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.293746    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:34.289745    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.290228    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.291731    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.292260    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:34.293746    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
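Every describe-nodes attempt dies the same way: kubectl cannot even fetch the API group list because nothing is listening on localhost:8441, the apiserver port this profile uses, so the dial is refused outright. A hedged one-file sketch that reproduces the distinction (the port is taken from the errors above; it only checks that something accepts TCP, not that the listener is a healthy apiserver):

package main

import (
	"fmt"
	"net"
	"time"
)

// probe reports whether anything accepts TCP connections on addr. A
// "connection refused" here matches the kubectl errors in the log:
// no listener at all, as opposed to a slow or unhealthy apiserver.
func probe(addr string) {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Printf("%s unreachable: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("%s accepts connections\n", addr)
}

func main() {
	probe("localhost:8441") // the port the kubectl calls above were refused on
}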
	I1003 18:15:34.296208   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:34.296218   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:34.353696   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:34.353715   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
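The container-status command is deliberately runtime-agnostic: it resolves crictl if it is on PATH and otherwise falls back to docker ps -a, so the same sweep works whether the node runs CRI-O or Docker. The same fallback idiom in Go, as a sketch that assumes either binary may be missing:

package main

import (
	"fmt"
	"os/exec"
)

// listContainers tries crictl first and falls back to docker, echoing
// the `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`
// idiom from the log. Both tools accept `ps -a` to list all containers.
func listContainers() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			return out, nil
		}
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := listContainers()
	if err != nil {
		fmt.Println("no container runtime answered:", err)
		return
	}
	fmt.Print(string(out))
}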
	I1003 18:15:36.882850   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:36.893827   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:36.893878   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:36.918928   38063 cri.go:89] found id: ""
	I1003 18:15:36.918945   38063 logs.go:282] 0 containers: []
	W1003 18:15:36.918954   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:36.918960   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:36.919024   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:36.943500   38063 cri.go:89] found id: ""
	I1003 18:15:36.943516   38063 logs.go:282] 0 containers: []
	W1003 18:15:36.943524   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:36.943529   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:36.943571   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:36.967892   38063 cri.go:89] found id: ""
	I1003 18:15:36.967909   38063 logs.go:282] 0 containers: []
	W1003 18:15:36.967917   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:36.967921   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:36.967961   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:36.992302   38063 cri.go:89] found id: ""
	I1003 18:15:36.992316   38063 logs.go:282] 0 containers: []
	W1003 18:15:36.992322   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:36.992326   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:36.992371   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:37.017414   38063 cri.go:89] found id: ""
	I1003 18:15:37.017429   38063 logs.go:282] 0 containers: []
	W1003 18:15:37.017435   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:37.017440   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:37.017483   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:37.042577   38063 cri.go:89] found id: ""
	I1003 18:15:37.042596   38063 logs.go:282] 0 containers: []
	W1003 18:15:37.042601   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:37.042606   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:37.042652   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:37.067424   38063 cri.go:89] found id: ""
	I1003 18:15:37.067438   38063 logs.go:282] 0 containers: []
	W1003 18:15:37.067444   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:37.067451   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:37.067466   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:37.133058   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:37.133076   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:37.144095   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:37.144109   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:37.201432   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:37.195051    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.195552    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.197089    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.197493    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.198600    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:37.195051    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.195552    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.197089    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.197493    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:37.198600    6882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:37.201453   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:37.201464   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:37.264020   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:37.264041   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:39.793917   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:39.804160   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:39.804201   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:39.828532   38063 cri.go:89] found id: ""
	I1003 18:15:39.828545   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.828551   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:39.828557   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:39.828595   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:39.854181   38063 cri.go:89] found id: ""
	I1003 18:15:39.854194   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.854199   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:39.854203   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:39.854241   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:39.878636   38063 cri.go:89] found id: ""
	I1003 18:15:39.878649   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.878655   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:39.878665   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:39.878714   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:39.903647   38063 cri.go:89] found id: ""
	I1003 18:15:39.903662   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.903672   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:39.903678   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:39.903727   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:39.928358   38063 cri.go:89] found id: ""
	I1003 18:15:39.928371   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.928377   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:39.928382   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:39.928425   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:39.952698   38063 cri.go:89] found id: ""
	I1003 18:15:39.952712   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.952718   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:39.952722   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:39.952770   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:39.977762   38063 cri.go:89] found id: ""
	I1003 18:15:39.977779   38063 logs.go:282] 0 containers: []
	W1003 18:15:39.977788   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:39.977798   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:39.977810   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:40.047503   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:40.047521   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:40.058597   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:40.058612   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:40.113456   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:40.107101    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.107593    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.109120    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.109527    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.111020    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:40.107101    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.107593    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.109120    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.109527    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:40.111020    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:40.113474   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:40.113485   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:40.173884   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:40.173904   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:42.702098   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:42.712135   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:42.712176   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:42.735423   38063 cri.go:89] found id: ""
	I1003 18:15:42.735438   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.735445   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:42.735450   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:42.735502   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:42.758834   38063 cri.go:89] found id: ""
	I1003 18:15:42.758847   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.758853   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:42.758857   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:42.758918   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:42.782548   38063 cri.go:89] found id: ""
	I1003 18:15:42.782564   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.782573   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:42.782578   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:42.782631   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:42.808289   38063 cri.go:89] found id: ""
	I1003 18:15:42.808307   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.808315   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:42.808321   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:42.808362   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:42.832106   38063 cri.go:89] found id: ""
	I1003 18:15:42.832120   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.832126   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:42.832136   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:42.832178   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:42.856681   38063 cri.go:89] found id: ""
	I1003 18:15:42.856697   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.856704   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:42.856708   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:42.856753   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:42.880778   38063 cri.go:89] found id: ""
	I1003 18:15:42.880793   38063 logs.go:282] 0 containers: []
	W1003 18:15:42.880799   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:42.880806   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:42.880815   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:42.891568   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:42.891591   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:42.944856   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:42.938479    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.938960    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.940463    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.940834    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.942358    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:42.938479    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.938960    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.940463    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.940834    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:42.942358    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:42.944869   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:42.944883   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:43.008325   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:43.008342   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:43.034919   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:43.034934   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:45.601892   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:45.612293   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:45.612337   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:45.636800   38063 cri.go:89] found id: ""
	I1003 18:15:45.636816   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.636825   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:45.636831   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:45.636897   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:45.663419   38063 cri.go:89] found id: ""
	I1003 18:15:45.663431   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.663442   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:45.663446   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:45.663484   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:45.688326   38063 cri.go:89] found id: ""
	I1003 18:15:45.688340   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.688346   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:45.688350   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:45.688390   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:45.713903   38063 cri.go:89] found id: ""
	I1003 18:15:45.713916   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.713923   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:45.713929   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:45.713969   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:45.738540   38063 cri.go:89] found id: ""
	I1003 18:15:45.738554   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.738560   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:45.738565   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:45.738626   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:45.763029   38063 cri.go:89] found id: ""
	I1003 18:15:45.763042   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.763049   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:45.763054   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:45.763105   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:45.787593   38063 cri.go:89] found id: ""
	I1003 18:15:45.787605   38063 logs.go:282] 0 containers: []
	W1003 18:15:45.787613   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:45.787619   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:45.787628   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:45.814410   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:45.814426   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:45.879690   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:45.879708   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:45.890632   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:45.890646   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:45.945900   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:45.939503    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.940097    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.941591    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.942022    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.943469    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:45.939503    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.940097    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.941591    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.942022    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:45.943469    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:45.945911   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:45.945920   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:48.510685   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:48.520989   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:48.521030   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:48.545850   38063 cri.go:89] found id: ""
	I1003 18:15:48.545863   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.545871   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:48.545875   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:48.545917   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:48.570678   38063 cri.go:89] found id: ""
	I1003 18:15:48.570691   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.570699   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:48.570704   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:48.570758   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:48.594906   38063 cri.go:89] found id: ""
	I1003 18:15:48.594922   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.594931   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:48.594936   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:48.595011   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:48.620934   38063 cri.go:89] found id: ""
	I1003 18:15:48.620951   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.620958   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:48.620963   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:48.621033   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:48.645916   38063 cri.go:89] found id: ""
	I1003 18:15:48.645933   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.645942   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:48.645947   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:48.646009   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:48.670919   38063 cri.go:89] found id: ""
	I1003 18:15:48.670932   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.670939   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:48.670944   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:48.671004   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:48.695257   38063 cri.go:89] found id: ""
	I1003 18:15:48.695274   38063 logs.go:282] 0 containers: []
	W1003 18:15:48.695281   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:48.695289   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:48.695298   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:48.723183   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:48.723198   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:48.790906   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:48.790924   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:48.802517   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:48.802531   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:48.858274   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:48.851795    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.852286    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.853794    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.854187    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.855729    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:48.851795    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.852286    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.853794    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.854187    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:48.855729    7397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:48.858294   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:48.858309   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:51.418365   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:51.428790   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:51.428851   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:51.453214   38063 cri.go:89] found id: ""
	I1003 18:15:51.453228   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.453235   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:51.453241   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:51.453302   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:51.478216   38063 cri.go:89] found id: ""
	I1003 18:15:51.478231   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.478241   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:51.478247   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:51.478298   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:51.503301   38063 cri.go:89] found id: ""
	I1003 18:15:51.503316   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.503322   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:51.503327   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:51.503368   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:51.528130   38063 cri.go:89] found id: ""
	I1003 18:15:51.528146   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.528152   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:51.528157   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:51.528196   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:51.553046   38063 cri.go:89] found id: ""
	I1003 18:15:51.553076   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.553084   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:51.553091   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:51.553133   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:51.577406   38063 cri.go:89] found id: ""
	I1003 18:15:51.577420   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.577426   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:51.577432   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:51.577471   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:51.602068   38063 cri.go:89] found id: ""
	I1003 18:15:51.602084   38063 logs.go:282] 0 containers: []
	W1003 18:15:51.602092   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:51.602102   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:51.602114   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:51.629035   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:51.629051   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:51.697997   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:51.698016   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:51.710748   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:51.710769   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:51.764330   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:51.757745    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.758298    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.759850    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.760310    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.761740    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:15:51.757745    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.758298    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.759850    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.760310    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:51.761740    7526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:15:51.764338   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:51.764348   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:54.323078   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:54.333510   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:54.333559   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:54.357777   38063 cri.go:89] found id: ""
	I1003 18:15:54.357790   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.357796   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:54.357800   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:54.357841   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:54.381421   38063 cri.go:89] found id: ""
	I1003 18:15:54.381435   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.381442   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:54.381447   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:54.381495   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:54.404951   38063 cri.go:89] found id: ""
	I1003 18:15:54.404969   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.404991   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:54.404999   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:54.405045   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:54.429154   38063 cri.go:89] found id: ""
	I1003 18:15:54.429172   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.429181   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:54.429186   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:54.429224   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:54.452874   38063 cri.go:89] found id: ""
	I1003 18:15:54.452895   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.452903   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:54.452907   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:54.452946   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:54.477916   38063 cri.go:89] found id: ""
	I1003 18:15:54.477929   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.477937   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:54.477942   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:54.478001   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:54.503676   38063 cri.go:89] found id: ""
	I1003 18:15:54.503692   38063 logs.go:282] 0 containers: []
	W1003 18:15:54.503699   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:54.503706   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:54.503716   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:54.571451   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:54.571469   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:54.582598   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:54.582614   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:54.635288   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:54.629106    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.629524    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.631026    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.631408    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:54.632845    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:15:54.635301   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:54.635338   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:54.693328   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:54.693348   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
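The cycle above is the probe minikube runs while waiting for the apiserver to come back: check for a kube-apiserver process, then ask the container runtime for each expected control-plane container by name, and, when every lookup comes back empty, gather diagnostics. A minimal sketch of the same probe, built only from the commands visible in the log (it assumes crictl on the node is already configured for the CRI-O socket):

    # is an apiserver process for this profile running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # list any container, running or exited, for each expected component;
    # empty output here is what produces the "0 containers: []" lines above
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$name"
    done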
	I1003 18:15:57.224616   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:15:57.234873   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:15:57.234916   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:15:57.259150   38063 cri.go:89] found id: ""
	I1003 18:15:57.259164   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.259170   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:15:57.259175   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:15:57.259224   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:15:57.282636   38063 cri.go:89] found id: ""
	I1003 18:15:57.282650   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.282662   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:15:57.282667   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:15:57.282716   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:15:57.307774   38063 cri.go:89] found id: ""
	I1003 18:15:57.307792   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.307800   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:15:57.307806   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:15:57.307846   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:15:57.331087   38063 cri.go:89] found id: ""
	I1003 18:15:57.331101   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.331107   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:15:57.331112   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:15:57.331153   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:15:57.356108   38063 cri.go:89] found id: ""
	I1003 18:15:57.356125   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.356200   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:15:57.356209   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:15:57.356267   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:15:57.381138   38063 cri.go:89] found id: ""
	I1003 18:15:57.381154   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.381161   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:15:57.381166   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:15:57.381206   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:15:57.405322   38063 cri.go:89] found id: ""
	I1003 18:15:57.405339   38063 logs.go:282] 0 containers: []
	W1003 18:15:57.405345   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:15:57.405353   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:15:57.405362   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:15:57.463330   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:15:57.463345   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:15:57.491754   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:15:57.491771   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:15:57.557710   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:15:57.557727   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:15:57.569135   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:15:57.569150   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:15:57.622275   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:15:57.615880    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.616369    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.617874    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.618325    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:15:57.619768    7776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
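Each "describe nodes" gather fails the same way: the kubeconfig on the node points kubectl at the profile's apiserver endpoint (localhost:8441 here), and with no kube-apiserver container running, nothing is listening, so the TCP connect is refused before any API call is made. A quick way to confirm that from inside the node, assuming ss (iproute2) and curl are available there, which the log itself does not show:

    # no process should be listening on the apiserver port
    sudo ss -ltn | grep -w 8441 || echo "no listener on 8441"
    # reproduces the refusal kubectl reports, without going through kubectl
    curl -ksS https://localhost:8441/healthz || true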
	I1003 18:16:00.123157   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:00.133350   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:00.133393   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:00.157946   38063 cri.go:89] found id: ""
	I1003 18:16:00.157958   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.157965   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:00.157970   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:00.158035   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:00.182943   38063 cri.go:89] found id: ""
	I1003 18:16:00.182956   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.182962   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:00.182967   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:00.183026   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:00.206834   38063 cri.go:89] found id: ""
	I1003 18:16:00.206848   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.206854   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:00.206858   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:00.206901   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:00.231944   38063 cri.go:89] found id: ""
	I1003 18:16:00.231959   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.231965   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:00.231970   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:00.232027   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:00.257587   38063 cri.go:89] found id: ""
	I1003 18:16:00.257607   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.257613   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:00.257619   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:00.257662   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:00.281667   38063 cri.go:89] found id: ""
	I1003 18:16:00.281683   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.281690   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:00.281694   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:00.281735   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:00.306161   38063 cri.go:89] found id: ""
	I1003 18:16:00.306173   38063 logs.go:282] 0 containers: []
	W1003 18:16:00.306183   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:00.306189   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:00.306199   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:00.334078   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:00.334094   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:00.398782   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:00.398800   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:00.410100   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:00.410118   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:00.464563   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:00.458004    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.458485    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.459956    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.460373    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:00.461844    7894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:00.464573   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:00.464584   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
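Note that the log gathers are bounded on purpose: the kubelet and crio journals are capped at their last 400 lines (journalctl -n 400), and dmesg is filtered to warning level and above (--level warn,err,crit,alert,emerg) before the same 400-line tail, so each iteration captures only recent, high-signal host output rather than the full journals.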
	I1003 18:16:03.025201   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:03.035449   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:03.035489   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:03.060615   38063 cri.go:89] found id: ""
	I1003 18:16:03.060629   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.060638   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:03.060644   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:03.060695   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:03.085028   38063 cri.go:89] found id: ""
	I1003 18:16:03.085041   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.085047   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:03.085052   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:03.085101   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:03.109281   38063 cri.go:89] found id: ""
	I1003 18:16:03.109295   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.109301   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:03.109306   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:03.109343   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:03.133199   38063 cri.go:89] found id: ""
	I1003 18:16:03.133212   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.133218   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:03.133223   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:03.133271   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:03.157142   38063 cri.go:89] found id: ""
	I1003 18:16:03.157158   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.157167   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:03.157174   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:03.157215   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:03.181156   38063 cri.go:89] found id: ""
	I1003 18:16:03.181170   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.181177   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:03.181182   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:03.181225   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:03.207371   38063 cri.go:89] found id: ""
	I1003 18:16:03.207385   38063 logs.go:282] 0 containers: []
	W1003 18:16:03.207392   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:03.207399   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:03.207407   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:03.268072   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:03.268093   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:03.295655   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:03.295675   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:03.359095   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:03.359116   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:03.370093   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:03.370110   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:03.423681   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:03.416458    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:03.416947    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:03.419089    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:03.419495    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:03.421012    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:05.925327   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:05.935882   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:05.935927   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:05.960833   38063 cri.go:89] found id: ""
	I1003 18:16:05.960850   38063 logs.go:282] 0 containers: []
	W1003 18:16:05.960858   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:05.960864   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:05.960918   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:05.985562   38063 cri.go:89] found id: ""
	I1003 18:16:05.985577   38063 logs.go:282] 0 containers: []
	W1003 18:16:05.985585   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:05.985592   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:05.985644   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:06.008796   38063 cri.go:89] found id: ""
	I1003 18:16:06.008813   38063 logs.go:282] 0 containers: []
	W1003 18:16:06.008822   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:06.008827   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:06.008865   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:06.034023   38063 cri.go:89] found id: ""
	I1003 18:16:06.034037   38063 logs.go:282] 0 containers: []
	W1003 18:16:06.034043   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:06.034048   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:06.034099   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:06.057314   38063 cri.go:89] found id: ""
	I1003 18:16:06.057330   38063 logs.go:282] 0 containers: []
	W1003 18:16:06.057340   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:06.057347   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:06.057396   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:06.082843   38063 cri.go:89] found id: ""
	I1003 18:16:06.082859   38063 logs.go:282] 0 containers: []
	W1003 18:16:06.082865   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:06.082870   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:06.082921   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:06.106237   38063 cri.go:89] found id: ""
	I1003 18:16:06.106251   38063 logs.go:282] 0 containers: []
	W1003 18:16:06.106257   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:06.106264   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:06.106276   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:06.175390   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:06.175407   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:06.186550   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:06.186565   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:06.239490   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:06.233165    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:06.233624    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:06.235128    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:06.235537    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:06.237048    8129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:06.239500   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:06.239513   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:06.301454   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:06.301474   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
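The "container status" gather is written defensively: `which crictl || echo crictl` resolves crictl's full path when it is on PATH and otherwise falls back to the bare name, and only if that whole crictl invocation fails does the command fall through to docker, so the same one-liner works whether the node runs CRI-O (as here) or docker. Spelled out:

    # prefer crictl (full path if resolvable, bare name otherwise)...
    sudo `which crictl || echo crictl` ps -a \
      || sudo docker ps -a   # ...and only fall back to docker if crictl itself fails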
	I1003 18:16:08.830757   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:08.841156   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:08.841199   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:08.865562   38063 cri.go:89] found id: ""
	I1003 18:16:08.865578   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.865584   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:08.865589   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:08.865636   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:08.889510   38063 cri.go:89] found id: ""
	I1003 18:16:08.889527   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.889536   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:08.889543   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:08.889588   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:08.914125   38063 cri.go:89] found id: ""
	I1003 18:16:08.914140   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.914146   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:08.914150   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:08.914195   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:08.937681   38063 cri.go:89] found id: ""
	I1003 18:16:08.937697   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.937706   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:08.937711   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:08.937752   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:08.961970   38063 cri.go:89] found id: ""
	I1003 18:16:08.961998   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.962006   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:08.962012   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:08.962073   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:08.986853   38063 cri.go:89] found id: ""
	I1003 18:16:08.986870   38063 logs.go:282] 0 containers: []
	W1003 18:16:08.986877   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:08.986883   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:08.986953   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:09.012531   38063 cri.go:89] found id: ""
	I1003 18:16:09.012547   38063 logs.go:282] 0 containers: []
	W1003 18:16:09.012555   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:09.012570   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:09.012581   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:09.078036   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:09.078053   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:09.088904   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:09.088918   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:09.143252   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:09.136367    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:09.136907    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:09.138514    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:09.139001    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:09.140648    8245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:09.143263   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:09.143275   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:09.201869   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:09.201887   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:11.730105   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:11.740344   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:11.740384   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:11.765234   38063 cri.go:89] found id: ""
	I1003 18:16:11.765247   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.765256   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:11.765261   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:11.765318   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:11.789130   38063 cri.go:89] found id: ""
	I1003 18:16:11.789143   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.789149   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:11.789154   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:11.789198   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:11.815036   38063 cri.go:89] found id: ""
	I1003 18:16:11.815050   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.815058   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:11.815064   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:11.815113   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:11.839467   38063 cri.go:89] found id: ""
	I1003 18:16:11.839483   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.839490   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:11.839495   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:11.839539   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:11.863864   38063 cri.go:89] found id: ""
	I1003 18:16:11.863893   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.863899   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:11.863904   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:11.863955   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:11.889464   38063 cri.go:89] found id: ""
	I1003 18:16:11.889480   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.889488   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:11.889495   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:11.889535   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:11.912845   38063 cri.go:89] found id: ""
	I1003 18:16:11.912862   38063 logs.go:282] 0 containers: []
	W1003 18:16:11.912870   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:11.912880   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:11.912904   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:11.966773   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:11.959444    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:11.960161    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:11.961014    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:11.962530    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:11.962898    8360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:11.966785   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:11.966795   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:12.025128   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:12.025146   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:12.053945   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:12.053960   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:12.119420   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:12.119438   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
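The timestamps show this whole probe-and-gather cycle repeating roughly every three seconds (18:15:54, 18:15:57, 18:16:00, ...), with only the order of the five gathers varying between iterations; minikube keeps polling like this until the apiserver appears or its wait period runs out. As a rough shell illustration only (the real loop is in minikube's Go code, not shell), the cadence amounts to:

    # poll at the ~3s interval visible in the timestamps above
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done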
	I1003 18:16:14.631092   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:14.641283   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:14.641330   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:14.665808   38063 cri.go:89] found id: ""
	I1003 18:16:14.665821   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.665827   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:14.665832   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:14.665874   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:14.690191   38063 cri.go:89] found id: ""
	I1003 18:16:14.690204   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.690211   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:14.690216   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:14.690266   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:14.715586   38063 cri.go:89] found id: ""
	I1003 18:16:14.715598   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.715619   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:14.715623   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:14.715677   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:14.740173   38063 cri.go:89] found id: ""
	I1003 18:16:14.740190   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.740198   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:14.740202   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:14.740247   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:14.764574   38063 cri.go:89] found id: ""
	I1003 18:16:14.764589   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.764595   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:14.764599   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:14.764653   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:14.788993   38063 cri.go:89] found id: ""
	I1003 18:16:14.789007   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.789014   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:14.789018   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:14.789059   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:14.813679   38063 cri.go:89] found id: ""
	I1003 18:16:14.813692   38063 logs.go:282] 0 containers: []
	W1003 18:16:14.813699   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:14.813706   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:14.813715   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:14.840363   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:14.840378   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:14.906264   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:14.906280   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:14.917237   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:14.917251   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:14.971230   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:14.964471    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:14.965000    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:14.966522    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:14.966918    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:14.968491    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:14.971246   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:14.971257   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:17.534133   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:17.544453   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:17.544502   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:17.568816   38063 cri.go:89] found id: ""
	I1003 18:16:17.568834   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.568841   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:17.568847   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:17.568899   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:17.593442   38063 cri.go:89] found id: ""
	I1003 18:16:17.593460   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.593466   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:17.593472   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:17.593515   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:17.617737   38063 cri.go:89] found id: ""
	I1003 18:16:17.617754   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.617761   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:17.617766   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:17.617804   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:17.642180   38063 cri.go:89] found id: ""
	I1003 18:16:17.642194   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.642201   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:17.642206   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:17.642250   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:17.666189   38063 cri.go:89] found id: ""
	I1003 18:16:17.666204   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.666210   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:17.666214   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:17.666259   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:17.689273   38063 cri.go:89] found id: ""
	I1003 18:16:17.689289   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.689297   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:17.689305   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:17.689345   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:17.714353   38063 cri.go:89] found id: ""
	I1003 18:16:17.714373   38063 logs.go:282] 0 containers: []
	W1003 18:16:17.714381   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:17.714394   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:17.714407   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:17.768746   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:17.762135    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:17.762597    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:17.764136    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:17.764533    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:17.766023    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:17.768759   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:17.768768   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:17.830139   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:17.830159   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:17.858326   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:17.858342   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:17.922889   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:17.922911   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:20.435863   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:20.446321   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:20.446361   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:20.471731   38063 cri.go:89] found id: ""
	I1003 18:16:20.471743   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.471749   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:20.471753   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:20.471792   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:20.495730   38063 cri.go:89] found id: ""
	I1003 18:16:20.495747   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.495755   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:20.495760   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:20.495815   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:20.520555   38063 cri.go:89] found id: ""
	I1003 18:16:20.520572   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.520581   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:20.520597   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:20.520650   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:20.545197   38063 cri.go:89] found id: ""
	I1003 18:16:20.545210   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.545216   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:20.545220   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:20.545258   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:20.569113   38063 cri.go:89] found id: ""
	I1003 18:16:20.569126   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.569132   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:20.569138   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:20.569189   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:20.593468   38063 cri.go:89] found id: ""
	I1003 18:16:20.593483   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.593491   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:20.593496   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:20.593545   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:20.617852   38063 cri.go:89] found id: ""
	I1003 18:16:20.617865   38063 logs.go:282] 0 containers: []
	W1003 18:16:20.617872   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:20.617878   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:20.617887   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:20.680360   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:20.680379   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:20.691258   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:20.691271   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:20.745174   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:20.738655    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.739179    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.740672    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.741122    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:20.742610    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:20.745187   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:20.745197   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:20.806835   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:20.806853   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:23.335788   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:23.346440   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:23.346505   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:23.371250   38063 cri.go:89] found id: ""
	I1003 18:16:23.371263   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.371269   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:23.371273   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:23.371315   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:23.396570   38063 cri.go:89] found id: ""
	I1003 18:16:23.396585   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.396592   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:23.396596   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:23.396646   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:23.420703   38063 cri.go:89] found id: ""
	I1003 18:16:23.420718   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.420728   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:23.420735   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:23.420783   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:23.445294   38063 cri.go:89] found id: ""
	I1003 18:16:23.445310   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.445319   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:23.445326   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:23.445372   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:23.470082   38063 cri.go:89] found id: ""
	I1003 18:16:23.470100   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.470106   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:23.470110   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:23.470148   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:23.494417   38063 cri.go:89] found id: ""
	I1003 18:16:23.494432   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.494441   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:23.494446   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:23.494489   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:23.519492   38063 cri.go:89] found id: ""
	I1003 18:16:23.519507   38063 logs.go:282] 0 containers: []
	W1003 18:16:23.519516   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:23.519526   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:23.519538   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:23.583328   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:23.583346   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:23.594696   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:23.594710   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:23.649094   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:23.642344    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.642882    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.644368    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.644805    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:23.646275    8860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:23.649104   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:23.649113   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:23.710665   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:23.710684   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:26.239439   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:26.250313   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:26.250355   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:26.275460   38063 cri.go:89] found id: ""
	I1003 18:16:26.275476   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.275484   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:26.275490   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:26.275544   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:26.300685   38063 cri.go:89] found id: ""
	I1003 18:16:26.300701   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.300710   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:26.300716   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:26.300760   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:26.324124   38063 cri.go:89] found id: ""
	I1003 18:16:26.324141   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.324150   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:26.324156   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:26.324203   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:26.349331   38063 cri.go:89] found id: ""
	I1003 18:16:26.349348   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.349357   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:26.349363   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:26.349407   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:26.373924   38063 cri.go:89] found id: ""
	I1003 18:16:26.373938   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.373944   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:26.373948   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:26.374020   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:26.398561   38063 cri.go:89] found id: ""
	I1003 18:16:26.398575   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.398581   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:26.398593   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:26.398637   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:26.423043   38063 cri.go:89] found id: ""
	I1003 18:16:26.423055   38063 logs.go:282] 0 containers: []
	W1003 18:16:26.423064   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:26.423073   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:26.423085   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:26.448940   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:26.448957   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:26.514345   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:26.514362   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:26.525206   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:26.525218   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:26.579573   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:26.572848    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.573316    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.574821    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.575280    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:26.576738    8996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:26.579590   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:26.579599   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:29.139399   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:29.149491   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:29.149546   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:29.174745   38063 cri.go:89] found id: ""
	I1003 18:16:29.174759   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.174764   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:29.174769   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:29.174809   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:29.199728   38063 cri.go:89] found id: ""
	I1003 18:16:29.199741   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.199747   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:29.199752   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:29.199803   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:29.225114   38063 cri.go:89] found id: ""
	I1003 18:16:29.225130   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.225139   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:29.225145   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:29.225208   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:29.249942   38063 cri.go:89] found id: ""
	I1003 18:16:29.249959   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.249968   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:29.249990   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:29.250054   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:29.274658   38063 cri.go:89] found id: ""
	I1003 18:16:29.274676   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.274684   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:29.274690   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:29.274740   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:29.299132   38063 cri.go:89] found id: ""
	I1003 18:16:29.299147   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.299153   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:29.299159   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:29.299207   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:29.323399   38063 cri.go:89] found id: ""
	I1003 18:16:29.323414   38063 logs.go:282] 0 containers: []
	W1003 18:16:29.323420   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:29.323427   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:29.323436   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:29.388896   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:29.388919   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:29.400252   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:29.400267   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:29.453553   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:29.447303    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.447746    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.449289    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.449640    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:29.451133    9105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:29.453604   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:29.453615   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:29.515234   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:29.515257   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:32.045106   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:32.055516   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:32.055563   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:32.081412   38063 cri.go:89] found id: ""
	I1003 18:16:32.081425   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.081431   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:32.081436   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:32.081476   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:32.106569   38063 cri.go:89] found id: ""
	I1003 18:16:32.106585   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.106591   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:32.106595   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:32.106634   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:32.131668   38063 cri.go:89] found id: ""
	I1003 18:16:32.131684   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.131692   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:32.131699   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:32.131745   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:32.156465   38063 cri.go:89] found id: ""
	I1003 18:16:32.156479   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.156485   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:32.156490   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:32.156566   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:32.181247   38063 cri.go:89] found id: ""
	I1003 18:16:32.181260   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.181267   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:32.181271   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:32.181314   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:32.205219   38063 cri.go:89] found id: ""
	I1003 18:16:32.205236   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.205245   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:32.205252   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:32.205305   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:32.229751   38063 cri.go:89] found id: ""
	I1003 18:16:32.229767   38063 logs.go:282] 0 containers: []
	W1003 18:16:32.229776   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:32.229785   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:32.229797   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:32.257251   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:32.257266   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:32.325308   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:32.325326   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:32.336569   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:32.336584   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:32.391680   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:32.384542    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.385163    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.386741    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.387204    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:32.388820    9251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:32.391693   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:32.391706   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:34.954303   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:34.965018   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:34.965070   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:34.990955   38063 cri.go:89] found id: ""
	I1003 18:16:34.990970   38063 logs.go:282] 0 containers: []
	W1003 18:16:34.990992   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:34.990999   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:34.991061   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:35.015676   38063 cri.go:89] found id: ""
	I1003 18:16:35.015689   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.015695   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:35.015699   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:35.015737   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:35.040155   38063 cri.go:89] found id: ""
	I1003 18:16:35.040168   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.040174   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:35.040179   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:35.040218   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:35.065569   38063 cri.go:89] found id: ""
	I1003 18:16:35.065587   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.065596   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:35.065602   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:35.065663   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:35.090276   38063 cri.go:89] found id: ""
	I1003 18:16:35.090288   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.090295   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:35.090299   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:35.090339   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:35.114581   38063 cri.go:89] found id: ""
	I1003 18:16:35.114617   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.114627   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:35.114633   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:35.114688   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:35.139719   38063 cri.go:89] found id: ""
	I1003 18:16:35.139734   38063 logs.go:282] 0 containers: []
	W1003 18:16:35.139744   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:35.139753   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:35.139766   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:35.205015   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:35.205034   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:35.216021   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:35.216039   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:35.269655   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:35.262830    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.263341    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.264897    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.265346    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:35.266885    9359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:35.269664   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:35.269674   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:35.330604   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:35.330634   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:37.861503   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:37.871534   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:37.871641   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:37.895946   38063 cri.go:89] found id: ""
	I1003 18:16:37.895961   38063 logs.go:282] 0 containers: []
	W1003 18:16:37.895971   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:37.895995   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:37.896048   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:37.921286   38063 cri.go:89] found id: ""
	I1003 18:16:37.921301   38063 logs.go:282] 0 containers: []
	W1003 18:16:37.921308   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:37.921314   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:37.921364   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:37.946115   38063 cri.go:89] found id: ""
	I1003 18:16:37.946131   38063 logs.go:282] 0 containers: []
	W1003 18:16:37.946141   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:37.946148   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:37.946194   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:37.970857   38063 cri.go:89] found id: ""
	I1003 18:16:37.970871   38063 logs.go:282] 0 containers: []
	W1003 18:16:37.970878   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:37.970882   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:37.970930   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:37.997387   38063 cri.go:89] found id: ""
	I1003 18:16:37.997405   38063 logs.go:282] 0 containers: []
	W1003 18:16:37.997412   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:37.997416   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:37.997459   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:38.022848   38063 cri.go:89] found id: ""
	I1003 18:16:38.022862   38063 logs.go:282] 0 containers: []
	W1003 18:16:38.022869   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:38.022874   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:38.022938   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:38.048588   38063 cri.go:89] found id: ""
	I1003 18:16:38.048624   38063 logs.go:282] 0 containers: []
	W1003 18:16:38.048632   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:38.048640   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:38.048653   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:38.110031   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:38.110050   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:38.137498   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:38.137513   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:38.203958   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:38.203994   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:38.215727   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:38.215744   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:38.269765   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:38.263066    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.263531    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.265220    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.265597    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:38.267129    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:40.770413   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:40.780831   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:40.780874   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:40.804826   38063 cri.go:89] found id: ""
	I1003 18:16:40.804839   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.804845   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:40.804850   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:40.804890   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:40.830833   38063 cri.go:89] found id: ""
	I1003 18:16:40.830850   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.830858   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:40.830864   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:40.830930   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:40.856650   38063 cri.go:89] found id: ""
	I1003 18:16:40.856669   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.856677   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:40.856693   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:40.856748   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:40.881236   38063 cri.go:89] found id: ""
	I1003 18:16:40.881250   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.881256   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:40.881261   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:40.881301   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:40.905820   38063 cri.go:89] found id: ""
	I1003 18:16:40.905836   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.905843   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:40.905849   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:40.905900   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:40.931504   38063 cri.go:89] found id: ""
	I1003 18:16:40.931520   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.931527   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:40.931532   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:40.931583   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:40.957539   38063 cri.go:89] found id: ""
	I1003 18:16:40.957553   38063 logs.go:282] 0 containers: []
	W1003 18:16:40.957560   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:40.957567   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:40.957578   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:41.015948   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:41.015969   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:41.044701   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:41.044726   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:41.112388   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:41.112406   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:41.123384   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:41.123399   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:41.177789   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:41.171080    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.171701    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.173280    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.173749    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:41.175246    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:16:43.679496   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:43.689800   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:43.689843   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:43.714130   38063 cri.go:89] found id: ""
	I1003 18:16:43.714145   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.714152   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:43.714156   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:43.714197   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:43.738900   38063 cri.go:89] found id: ""
	I1003 18:16:43.738916   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.738924   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:43.738929   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:43.738972   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:43.763822   38063 cri.go:89] found id: ""
	I1003 18:16:43.763835   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.763841   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:43.763845   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:43.763884   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:43.789103   38063 cri.go:89] found id: ""
	I1003 18:16:43.789120   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.789128   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:43.789134   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:43.789187   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:43.813436   38063 cri.go:89] found id: ""
	I1003 18:16:43.813447   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.813455   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:43.813460   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:43.813513   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:43.838306   38063 cri.go:89] found id: ""
	I1003 18:16:43.838322   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.838331   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:43.838338   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:43.838382   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:43.863413   38063 cri.go:89] found id: ""
	I1003 18:16:43.863429   38063 logs.go:282] 0 containers: []
	W1003 18:16:43.863435   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:43.863442   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:43.863451   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:43.931299   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:43.931317   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:43.942307   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:43.942321   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:43.997476   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:43.990626    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.991191    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.992711    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.993154    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.994633    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:43.990626    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.991191    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.992711    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.993154    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:43.994633    9727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:43.997488   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:43.997500   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:44.053446   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:44.053464   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:46.583423   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:46.593663   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:46.593719   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:46.618188   38063 cri.go:89] found id: ""
	I1003 18:16:46.618202   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.618208   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:46.618213   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:46.618250   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:46.642929   38063 cri.go:89] found id: ""
	I1003 18:16:46.642943   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.642949   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:46.642954   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:46.643015   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:46.667745   38063 cri.go:89] found id: ""
	I1003 18:16:46.667761   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.667770   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:46.667775   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:46.667818   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:46.692080   38063 cri.go:89] found id: ""
	I1003 18:16:46.692092   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.692098   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:46.692102   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:46.692140   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:46.716789   38063 cri.go:89] found id: ""
	I1003 18:16:46.716807   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.716816   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:46.716822   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:46.716867   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:46.741361   38063 cri.go:89] found id: ""
	I1003 18:16:46.741375   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.741382   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:46.741389   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:46.741437   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:46.765330   38063 cri.go:89] found id: ""
	I1003 18:16:46.765343   38063 logs.go:282] 0 containers: []
	W1003 18:16:46.765349   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:46.765357   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:46.765368   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:46.830366   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:46.830385   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:46.841266   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:46.841279   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:46.894396   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:46.888072    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.888542    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.890079    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.890459    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.891950    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:46.888072    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.888542    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.890079    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.890459    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:46.891950    9852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:46.894415   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:46.894426   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:46.954277   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:46.954295   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:49.482413   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:49.492881   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:49.492921   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:49.516075   38063 cri.go:89] found id: ""
	I1003 18:16:49.516093   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.516102   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:49.516108   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:49.516154   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:49.542911   38063 cri.go:89] found id: ""
	I1003 18:16:49.542928   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.542936   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:49.542940   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:49.543006   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:49.568965   38063 cri.go:89] found id: ""
	I1003 18:16:49.568996   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.569005   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:49.569009   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:49.569055   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:49.593221   38063 cri.go:89] found id: ""
	I1003 18:16:49.593238   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.593246   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:49.593251   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:49.593302   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:49.618807   38063 cri.go:89] found id: ""
	I1003 18:16:49.618824   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.618831   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:49.618848   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:49.618893   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:49.642342   38063 cri.go:89] found id: ""
	I1003 18:16:49.642357   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.642363   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:49.642368   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:49.642407   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:49.666474   38063 cri.go:89] found id: ""
	I1003 18:16:49.666488   38063 logs.go:282] 0 containers: []
	W1003 18:16:49.666494   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:49.666502   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:49.666513   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:49.722457   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:49.722476   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:49.750153   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:49.750170   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:49.814369   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:49.814387   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:49.825405   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:49.825418   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:49.879924   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:49.873380    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.873871    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.875556    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.876003    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.877459    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:49.873380    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.873871    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.875556    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.876003    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:49.877459    9987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:52.380662   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:52.391022   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:52.391066   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:52.414399   38063 cri.go:89] found id: ""
	I1003 18:16:52.414416   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.414423   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:52.414428   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:52.414466   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:52.438285   38063 cri.go:89] found id: ""
	I1003 18:16:52.438301   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.438308   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:52.438312   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:52.438352   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:52.463204   38063 cri.go:89] found id: ""
	I1003 18:16:52.463218   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.463224   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:52.463229   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:52.463271   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:52.487579   38063 cri.go:89] found id: ""
	I1003 18:16:52.487593   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.487598   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:52.487605   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:52.487658   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:52.512643   38063 cri.go:89] found id: ""
	I1003 18:16:52.512657   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.512663   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:52.512667   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:52.512705   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:52.538897   38063 cri.go:89] found id: ""
	I1003 18:16:52.538913   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.538920   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:52.538926   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:52.538970   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:52.563277   38063 cri.go:89] found id: ""
	I1003 18:16:52.563294   38063 logs.go:282] 0 containers: []
	W1003 18:16:52.563302   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:52.563310   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:52.563321   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:52.622624   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:52.622642   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:52.650058   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:52.650074   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:52.714242   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:52.714261   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:52.725305   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:52.725319   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:52.777801   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:52.771320   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.772111   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.773166   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.773579   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.775090   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:52.771320   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.772111   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.773166   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.773579   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:52.775090   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:55.279440   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:55.290117   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:55.290161   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:55.315904   38063 cri.go:89] found id: ""
	I1003 18:16:55.315920   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.315926   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:55.315930   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:55.315996   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:55.340568   38063 cri.go:89] found id: ""
	I1003 18:16:55.340582   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.340588   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:55.340593   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:55.340631   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:55.365911   38063 cri.go:89] found id: ""
	I1003 18:16:55.365927   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.365937   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:55.365943   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:55.366003   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:55.390838   38063 cri.go:89] found id: ""
	I1003 18:16:55.390855   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.390864   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:55.390870   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:55.390924   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:55.414625   38063 cri.go:89] found id: ""
	I1003 18:16:55.414638   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.414651   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:55.414657   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:55.414712   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:55.438460   38063 cri.go:89] found id: ""
	I1003 18:16:55.438474   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.438480   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:55.438484   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:55.438522   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:55.463131   38063 cri.go:89] found id: ""
	I1003 18:16:55.463148   38063 logs.go:282] 0 containers: []
	W1003 18:16:55.463156   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:55.463165   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:55.463176   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:55.516949   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:55.510276   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.510824   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.512379   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.512767   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.514262   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:55.510276   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.510824   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.512379   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.512767   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:55.514262   10211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:16:55.516958   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:55.516968   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:55.573992   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:55.574010   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:55.601928   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:55.601944   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:55.667452   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:55.667470   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:58.180268   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:16:58.190896   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:16:58.190942   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:16:58.215802   38063 cri.go:89] found id: ""
	I1003 18:16:58.215820   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.215828   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:16:58.215835   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:16:58.215885   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:16:58.240607   38063 cri.go:89] found id: ""
	I1003 18:16:58.240623   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.240632   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:16:58.240638   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:16:58.240719   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:16:58.264676   38063 cri.go:89] found id: ""
	I1003 18:16:58.264689   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.264696   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:16:58.264703   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:16:58.264742   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:16:58.289482   38063 cri.go:89] found id: ""
	I1003 18:16:58.289496   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.289502   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:16:58.289507   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:16:58.289558   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:16:58.314683   38063 cri.go:89] found id: ""
	I1003 18:16:58.314699   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.314708   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:16:58.314714   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:16:58.314763   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:16:58.340874   38063 cri.go:89] found id: ""
	I1003 18:16:58.340900   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.340910   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:16:58.340918   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:16:58.340989   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:16:58.365744   38063 cri.go:89] found id: ""
	I1003 18:16:58.365765   38063 logs.go:282] 0 containers: []
	W1003 18:16:58.365774   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:16:58.365785   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:16:58.365798   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:16:58.424919   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:16:58.424938   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:16:58.452107   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:16:58.452122   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:16:58.516078   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:16:58.516098   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:16:58.527186   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:16:58.527200   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:16:58.581397   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:16:58.574853   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.575363   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.576868   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.577319   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.578848   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:16:58.574853   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.575363   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.576868   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.577319   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:16:58.578848   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:17:01.083146   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:01.093268   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:01.093310   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:01.117816   38063 cri.go:89] found id: ""
	I1003 18:17:01.117833   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.117840   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:01.117844   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:01.117882   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:01.141987   38063 cri.go:89] found id: ""
	I1003 18:17:01.142004   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.142012   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:01.142018   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:01.142057   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:01.165255   38063 cri.go:89] found id: ""
	I1003 18:17:01.165271   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.165277   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:01.165282   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:01.165323   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:01.189244   38063 cri.go:89] found id: ""
	I1003 18:17:01.189257   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.189264   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:01.189269   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:01.189310   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:01.213365   38063 cri.go:89] found id: ""
	I1003 18:17:01.213381   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.213388   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:01.213395   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:01.213442   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:01.240957   38063 cri.go:89] found id: ""
	I1003 18:17:01.240972   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.241000   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:01.241007   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:01.241051   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:01.267290   38063 cri.go:89] found id: ""
	I1003 18:17:01.267306   38063 logs.go:282] 0 containers: []
	W1003 18:17:01.267312   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:01.267320   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:01.267331   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:01.295273   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:01.295290   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:01.364816   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:01.364836   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:01.376420   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:01.376437   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:01.432587   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:01.425391   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.425950   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.427491   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.428036   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.429594   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:17:01.425391   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.425950   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.427491   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.428036   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:01.429594   10487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:17:01.432599   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:01.432613   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:03.992551   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:04.002736   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:04.002789   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:04.027153   38063 cri.go:89] found id: ""
	I1003 18:17:04.027169   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.027177   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:04.027183   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:04.027240   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:04.052384   38063 cri.go:89] found id: ""
	I1003 18:17:04.052399   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.052406   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:04.052411   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:04.052458   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:04.077210   38063 cri.go:89] found id: ""
	I1003 18:17:04.077225   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.077233   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:04.077243   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:04.077298   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:04.102192   38063 cri.go:89] found id: ""
	I1003 18:17:04.102208   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.102217   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:04.102223   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:04.102266   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:04.126632   38063 cri.go:89] found id: ""
	I1003 18:17:04.126647   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.126653   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:04.126658   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:04.126700   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:04.152736   38063 cri.go:89] found id: ""
	I1003 18:17:04.152752   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.152761   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:04.152768   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:04.152814   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:04.177062   38063 cri.go:89] found id: ""
	I1003 18:17:04.177080   38063 logs.go:282] 0 containers: []
	W1003 18:17:04.177089   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:04.177099   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:04.177112   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:04.188211   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:04.188225   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:04.242641   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:04.235414   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.235943   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.237902   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.238634   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.240168   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:17:04.235414   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.235943   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.237902   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.238634   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:04.240168   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:17:04.242649   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:04.242661   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:04.302342   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:04.302368   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:04.330691   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:04.330717   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:06.899448   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:06.909768   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:06.909813   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:06.934090   38063 cri.go:89] found id: ""
	I1003 18:17:06.934103   38063 logs.go:282] 0 containers: []
	W1003 18:17:06.934109   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:06.934114   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:06.934152   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:06.958320   38063 cri.go:89] found id: ""
	I1003 18:17:06.958334   38063 logs.go:282] 0 containers: []
	W1003 18:17:06.958340   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:06.958343   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:06.958381   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:06.984766   38063 cri.go:89] found id: ""
	I1003 18:17:06.984783   38063 logs.go:282] 0 containers: []
	W1003 18:17:06.984792   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:06.984797   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:06.984857   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:07.011801   38063 cri.go:89] found id: ""
	I1003 18:17:07.011818   38063 logs.go:282] 0 containers: []
	W1003 18:17:07.011827   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:07.011832   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:07.011871   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:07.036323   38063 cri.go:89] found id: ""
	I1003 18:17:07.036339   38063 logs.go:282] 0 containers: []
	W1003 18:17:07.036347   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:07.036352   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:07.036402   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:07.061101   38063 cri.go:89] found id: ""
	I1003 18:17:07.061117   38063 logs.go:282] 0 containers: []
	W1003 18:17:07.061126   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:07.061134   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:07.061184   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:07.085274   38063 cri.go:89] found id: ""
	I1003 18:17:07.085286   38063 logs.go:282] 0 containers: []
	W1003 18:17:07.085293   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:07.085300   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:07.085309   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:07.146317   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:07.146334   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:07.175088   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:07.175102   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:07.243716   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:07.243735   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:07.255174   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:07.255190   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:07.308657   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:07.302083   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:07.302582   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:07.304157   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:07.304555   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:07.306037   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:09.809372   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:09.819499   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:09.819542   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:09.844409   38063 cri.go:89] found id: ""
	I1003 18:17:09.844423   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.844435   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:09.844439   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:09.844478   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:09.868767   38063 cri.go:89] found id: ""
	I1003 18:17:09.868781   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.868787   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:09.868791   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:09.868832   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:09.891798   38063 cri.go:89] found id: ""
	I1003 18:17:09.891810   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.891817   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:09.891821   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:09.891858   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:09.917378   38063 cri.go:89] found id: ""
	I1003 18:17:09.917393   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.917399   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:09.917405   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:09.917450   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:09.942686   38063 cri.go:89] found id: ""
	I1003 18:17:09.942699   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.942705   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:09.942710   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:09.942750   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:09.966104   38063 cri.go:89] found id: ""
	I1003 18:17:09.966117   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.966123   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:09.966128   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:09.966166   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:09.993525   38063 cri.go:89] found id: ""
	I1003 18:17:09.993538   38063 logs.go:282] 0 containers: []
	W1003 18:17:09.993544   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:09.993551   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:09.993560   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:10.062246   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:10.062265   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:10.074081   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:10.074098   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:10.128788   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:10.122249   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:10.122773   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:10.124287   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:10.124702   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:10.126163   10850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:10.128809   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:10.128820   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:10.186632   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:10.186649   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:12.716320   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:12.726641   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:12.726693   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:12.750384   38063 cri.go:89] found id: ""
	I1003 18:17:12.750397   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.750403   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:12.750407   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:12.750446   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:12.775313   38063 cri.go:89] found id: ""
	I1003 18:17:12.775330   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.775338   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:12.775344   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:12.775384   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:12.800228   38063 cri.go:89] found id: ""
	I1003 18:17:12.800244   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.800251   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:12.800256   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:12.800298   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:12.825275   38063 cri.go:89] found id: ""
	I1003 18:17:12.825291   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.825300   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:12.825317   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:12.825372   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:12.849255   38063 cri.go:89] found id: ""
	I1003 18:17:12.849271   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.849279   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:12.849285   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:12.849336   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:12.873407   38063 cri.go:89] found id: ""
	I1003 18:17:12.873421   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.873427   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:12.873431   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:12.873482   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:12.896762   38063 cri.go:89] found id: ""
	I1003 18:17:12.896778   38063 logs.go:282] 0 containers: []
	W1003 18:17:12.896786   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:12.896795   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:12.896807   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:12.960955   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:12.960983   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:12.972163   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:12.972178   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:13.025479   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:13.018959   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:13.019441   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:13.020904   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:13.021379   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:13.022868   10964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:13.025493   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:13.025506   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:13.086473   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:13.086491   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:15.616095   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:15.626385   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:15.626428   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:15.650771   38063 cri.go:89] found id: ""
	I1003 18:17:15.650785   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.650792   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:15.650796   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:15.650837   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:15.675587   38063 cri.go:89] found id: ""
	I1003 18:17:15.675629   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.675637   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:15.675643   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:15.675705   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:15.699653   38063 cri.go:89] found id: ""
	I1003 18:17:15.699667   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.699673   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:15.699677   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:15.699716   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:15.724414   38063 cri.go:89] found id: ""
	I1003 18:17:15.724427   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.724435   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:15.724441   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:15.724496   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:15.749056   38063 cri.go:89] found id: ""
	I1003 18:17:15.749069   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.749077   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:15.749082   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:15.749123   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:15.773830   38063 cri.go:89] found id: ""
	I1003 18:17:15.773846   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.773859   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:15.773864   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:15.773907   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:15.798104   38063 cri.go:89] found id: ""
	I1003 18:17:15.798120   38063 logs.go:282] 0 containers: []
	W1003 18:17:15.798126   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:15.798133   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:15.798143   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:15.851960   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:15.845372   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:15.845936   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:15.847479   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:15.847794   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:15.849288   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:15.851990   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:15.852005   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:15.909042   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:15.909059   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:15.936198   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:15.936212   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:16.001546   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:16.001563   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:18.514268   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:18.524824   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:18.524867   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:18.549240   38063 cri.go:89] found id: ""
	I1003 18:17:18.549252   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.549259   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:18.549263   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:18.549304   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:18.573832   38063 cri.go:89] found id: ""
	I1003 18:17:18.573846   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.573851   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:18.573855   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:18.573893   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:18.600015   38063 cri.go:89] found id: ""
	I1003 18:17:18.600030   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.600038   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:18.600042   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:18.600092   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:18.624175   38063 cri.go:89] found id: ""
	I1003 18:17:18.624187   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.624193   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:18.624197   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:18.624235   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:18.647489   38063 cri.go:89] found id: ""
	I1003 18:17:18.647506   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.647515   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:18.647521   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:18.647563   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:18.671643   38063 cri.go:89] found id: ""
	I1003 18:17:18.671657   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.671663   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:18.671668   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:18.671706   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:18.696078   38063 cri.go:89] found id: ""
	I1003 18:17:18.696092   38063 logs.go:282] 0 containers: []
	W1003 18:17:18.696098   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:18.696105   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:18.696121   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:18.753226   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:18.753245   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:18.780990   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:18.781068   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:18.847947   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:18.847966   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:18.859021   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:18.859037   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:18.912345   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:18.905516   11225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:18.906367   11225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:18.907929   11225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:18.908373   11225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:18.909849   11225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:21.414030   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:21.425003   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:21.425051   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:21.450060   38063 cri.go:89] found id: ""
	I1003 18:17:21.450073   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.450080   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:21.450085   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:21.450124   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:21.474474   38063 cri.go:89] found id: ""
	I1003 18:17:21.474488   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.474494   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:21.474499   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:21.474539   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:21.498126   38063 cri.go:89] found id: ""
	I1003 18:17:21.498142   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.498149   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:21.498154   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:21.498203   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:21.523905   38063 cri.go:89] found id: ""
	I1003 18:17:21.523923   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.523932   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:21.523938   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:21.524008   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:21.548187   38063 cri.go:89] found id: ""
	I1003 18:17:21.548201   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.548207   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:21.548211   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:21.548252   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:21.572667   38063 cri.go:89] found id: ""
	I1003 18:17:21.572680   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.572686   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:21.572692   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:21.572736   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:21.597807   38063 cri.go:89] found id: ""
	I1003 18:17:21.597824   38063 logs.go:282] 0 containers: []
	W1003 18:17:21.597832   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:21.597839   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:21.597848   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:21.652152   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:21.645230   11331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:21.645729   11331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:21.647282   11331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:21.647701   11331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:21.649188   11331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:21.652166   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:21.652179   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:21.713448   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:21.713465   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:21.742437   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:21.742451   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:21.805537   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:21.805554   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:24.317361   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:24.327608   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:24.327671   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:24.354286   38063 cri.go:89] found id: ""
	I1003 18:17:24.354305   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.354315   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:24.354320   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:24.354379   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:24.378696   38063 cri.go:89] found id: ""
	I1003 18:17:24.378710   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.378718   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:24.378724   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:24.378782   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:24.402575   38063 cri.go:89] found id: ""
	I1003 18:17:24.402589   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.402595   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:24.402600   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:24.402648   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:24.427138   38063 cri.go:89] found id: ""
	I1003 18:17:24.427154   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.427162   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:24.427169   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:24.427211   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:24.451521   38063 cri.go:89] found id: ""
	I1003 18:17:24.451536   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.451543   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:24.451547   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:24.451590   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:24.475930   38063 cri.go:89] found id: ""
	I1003 18:17:24.475943   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.475949   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:24.475954   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:24.476012   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:24.500074   38063 cri.go:89] found id: ""
	I1003 18:17:24.500087   38063 logs.go:282] 0 containers: []
	W1003 18:17:24.500093   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:24.500100   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:24.500109   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:24.566537   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:24.566553   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:24.577539   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:24.577553   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:24.632738   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:24.626123   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:24.626592   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:24.628151   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:24.628571   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:24.630095   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:24.632749   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:24.632758   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:24.690610   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:24.690628   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:27.219340   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:27.229548   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:27.229602   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:27.253625   38063 cri.go:89] found id: ""
	I1003 18:17:27.253647   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.253655   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:27.253661   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:27.253712   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:27.277732   38063 cri.go:89] found id: ""
	I1003 18:17:27.277747   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.277756   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:27.277762   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:27.277804   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:27.301627   38063 cri.go:89] found id: ""
	I1003 18:17:27.301641   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.301647   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:27.301652   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:27.301701   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:27.327361   38063 cri.go:89] found id: ""
	I1003 18:17:27.327377   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.327386   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:27.327392   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:27.327455   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:27.351272   38063 cri.go:89] found id: ""
	I1003 18:17:27.351287   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.351296   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:27.351301   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:27.351354   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:27.376015   38063 cri.go:89] found id: ""
	I1003 18:17:27.376028   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.376034   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:27.376039   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:27.376078   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:27.401069   38063 cri.go:89] found id: ""
	I1003 18:17:27.401083   38063 logs.go:282] 0 containers: []
	W1003 18:17:27.401089   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:27.401096   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:27.401106   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:27.461887   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:27.461903   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:27.489794   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:27.489811   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:27.556416   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:27.556437   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:27.567650   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:27.567666   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:27.621254   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:27.614343   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:27.615016   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:27.616631   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:27.617100   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:27.618643   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:30.121948   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:30.132195   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:30.132251   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:30.157028   38063 cri.go:89] found id: ""
	I1003 18:17:30.157044   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.157052   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:30.157059   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:30.157114   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:30.181243   38063 cri.go:89] found id: ""
	I1003 18:17:30.181257   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.181267   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:30.181272   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:30.181327   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:30.204956   38063 cri.go:89] found id: ""
	I1003 18:17:30.204969   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.204990   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:30.204996   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:30.205049   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:30.229309   38063 cri.go:89] found id: ""
	I1003 18:17:30.229324   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.229332   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:30.229353   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:30.229404   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:30.253288   38063 cri.go:89] found id: ""
	I1003 18:17:30.253302   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.253308   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:30.253312   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:30.253353   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:30.276885   38063 cri.go:89] found id: ""
	I1003 18:17:30.276900   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.276907   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:30.276912   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:30.276954   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:30.302076   38063 cri.go:89] found id: ""
	I1003 18:17:30.302093   38063 logs.go:282] 0 containers: []
	W1003 18:17:30.302102   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:30.302111   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:30.302122   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:30.355957   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:30.349507   11695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:30.350118   11695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:30.351635   11695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:30.351999   11695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:30.353476   11695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:30.355967   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:30.355997   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:30.416595   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:30.416617   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:30.444417   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:30.444433   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:30.511869   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:30.511888   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
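The pattern above repeats for the remainder of this section: roughly every 2.5 seconds minikube checks for a running kube-apiserver process, asks the CRI runtime for each expected control-plane container, finds none, and re-gathers the kubelet, dmesg, CRI-O, and container-status logs. A minimal sketch of the per-component probe, assuming crictl is installed and the CRI socket is configured, using the component names from the log lines above:

	#!/usr/bin/env bash
	# Sketch of the container probe the loop runs on each pass (see the
	# "crictl ps -a --quiet --name=..." lines above).
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "No container was found matching \"$name\""
	  fi
	done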
	I1003 18:17:33.023698   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:33.034090   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:33.034130   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:33.058440   38063 cri.go:89] found id: ""
	I1003 18:17:33.058454   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.058463   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:33.058469   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:33.058516   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:33.083214   38063 cri.go:89] found id: ""
	I1003 18:17:33.083227   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.083233   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:33.083238   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:33.083278   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:33.107106   38063 cri.go:89] found id: ""
	I1003 18:17:33.107121   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.107128   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:33.107132   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:33.107177   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:33.132152   38063 cri.go:89] found id: ""
	I1003 18:17:33.132169   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.132178   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:33.132184   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:33.132237   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:33.156458   38063 cri.go:89] found id: ""
	I1003 18:17:33.156475   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.156486   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:33.156492   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:33.156541   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:33.181450   38063 cri.go:89] found id: ""
	I1003 18:17:33.181466   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.181474   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:33.181480   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:33.181520   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:33.204281   38063 cri.go:89] found id: ""
	I1003 18:17:33.204299   38063 logs.go:282] 0 containers: []
	W1003 18:17:33.204307   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:33.204316   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:33.204328   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:33.268843   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:33.268862   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:33.280428   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:33.280444   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:33.333875   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:33.327300   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:33.327741   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:33.329337   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:33.329778   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:33.331336   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:17:33.327300   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:33.327741   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:33.329337   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:33.329778   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:33.331336   11827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:17:33.333888   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:33.333899   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:33.395285   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:33.395303   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
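Every "describe nodes" attempt in this run fails identically: kubectl cannot reach the apiserver at localhost:8441. A quick manual check of the same endpoint (a sketch: port 8441 is taken from the errors above, and /healthz is the standard apiserver health endpoint, not something shown in this log):

	# Probe the endpoint the errors above point at; -k skips TLS
	# verification since the apiserver serves a cluster-CA certificate.
	curl -sk --max-time 5 https://localhost:8441/healthz \
	  || echo "connection refused: no apiserver listening on 8441"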
	I1003 18:17:35.924723   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:35.935417   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:35.935459   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:35.959423   38063 cri.go:89] found id: ""
	I1003 18:17:35.959437   38063 logs.go:282] 0 containers: []
	W1003 18:17:35.959444   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:35.959448   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:35.959497   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:35.984930   38063 cri.go:89] found id: ""
	I1003 18:17:35.984943   38063 logs.go:282] 0 containers: []
	W1003 18:17:35.984949   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:35.984953   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:35.985011   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:36.010660   38063 cri.go:89] found id: ""
	I1003 18:17:36.010676   38063 logs.go:282] 0 containers: []
	W1003 18:17:36.010685   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:36.010692   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:36.010750   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:36.036836   38063 cri.go:89] found id: ""
	I1003 18:17:36.036851   38063 logs.go:282] 0 containers: []
	W1003 18:17:36.036859   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:36.036865   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:36.036931   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:36.062748   38063 cri.go:89] found id: ""
	I1003 18:17:36.062764   38063 logs.go:282] 0 containers: []
	W1003 18:17:36.062774   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:36.062780   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:36.062832   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:36.088459   38063 cri.go:89] found id: ""
	I1003 18:17:36.088476   38063 logs.go:282] 0 containers: []
	W1003 18:17:36.088485   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:36.088492   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:36.088544   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:36.118150   38063 cri.go:89] found id: ""
	I1003 18:17:36.118166   38063 logs.go:282] 0 containers: []
	W1003 18:17:36.118174   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:36.118183   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:36.118195   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:36.188996   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:36.189016   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:36.201752   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:36.201774   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:36.259714   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:36.253085   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:36.253879   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:36.255461   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:36.255860   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:36.257025   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:17:36.253085   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:36.253879   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:36.255461   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:36.255860   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:36.257025   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:17:36.259724   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:36.259734   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:36.319327   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:36.319348   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:38.849084   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:38.860041   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:38.860087   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:38.885371   38063 cri.go:89] found id: ""
	I1003 18:17:38.885387   38063 logs.go:282] 0 containers: []
	W1003 18:17:38.885396   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:38.885403   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:38.885448   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:38.910420   38063 cri.go:89] found id: ""
	I1003 18:17:38.910433   38063 logs.go:282] 0 containers: []
	W1003 18:17:38.910439   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:38.910443   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:38.910492   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:38.935082   38063 cri.go:89] found id: ""
	I1003 18:17:38.935098   38063 logs.go:282] 0 containers: []
	W1003 18:17:38.935113   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:38.935119   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:38.935163   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:38.959589   38063 cri.go:89] found id: ""
	I1003 18:17:38.959605   38063 logs.go:282] 0 containers: []
	W1003 18:17:38.959614   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:38.959620   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:38.959664   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:38.983218   38063 cri.go:89] found id: ""
	I1003 18:17:38.983231   38063 logs.go:282] 0 containers: []
	W1003 18:17:38.983237   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:38.983241   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:38.983283   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:39.007734   38063 cri.go:89] found id: ""
	I1003 18:17:39.007748   38063 logs.go:282] 0 containers: []
	W1003 18:17:39.007754   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:39.007759   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:39.007803   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:39.032274   38063 cri.go:89] found id: ""
	I1003 18:17:39.032288   38063 logs.go:282] 0 containers: []
	W1003 18:17:39.032294   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:39.032301   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:39.032310   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:39.085898   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:39.079359   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:39.079847   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:39.081436   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:39.081830   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:39.083352   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:17:39.079359   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:39.079847   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:39.081436   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:39.081830   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:39.083352   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:17:39.085913   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:39.085926   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:39.147336   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:39.147355   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:39.174505   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:39.174520   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:39.236749   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:39.236770   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:41.751919   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:41.762279   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:41.762318   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:41.788348   38063 cri.go:89] found id: ""
	I1003 18:17:41.788364   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.788370   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:41.788375   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:41.788416   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:41.813364   38063 cri.go:89] found id: ""
	I1003 18:17:41.813377   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.813383   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:41.813387   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:41.813428   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:41.838263   38063 cri.go:89] found id: ""
	I1003 18:17:41.838278   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.838286   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:41.838296   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:41.838342   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:41.863852   38063 cri.go:89] found id: ""
	I1003 18:17:41.863866   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.863875   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:41.863880   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:41.863928   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:41.888046   38063 cri.go:89] found id: ""
	I1003 18:17:41.888059   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.888065   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:41.888069   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:41.888123   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:41.912391   38063 cri.go:89] found id: ""
	I1003 18:17:41.912407   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.912414   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:41.912419   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:41.912465   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:41.936635   38063 cri.go:89] found id: ""
	I1003 18:17:41.936652   38063 logs.go:282] 0 containers: []
	W1003 18:17:41.936667   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:41.936673   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:41.936682   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:41.999904   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:41.999923   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:42.010760   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:42.010774   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:42.063379   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:42.056776   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:42.057312   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:42.058864   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:42.059272   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:42.060765   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:17:42.056776   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:42.057312   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:42.058864   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:42.059272   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:42.060765   12201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:17:42.063391   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:42.063403   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:42.120707   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:42.120724   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:44.649184   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:44.659323   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:44.659383   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:44.684688   38063 cri.go:89] found id: ""
	I1003 18:17:44.684705   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.684714   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:44.684720   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:44.684766   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:44.709094   38063 cri.go:89] found id: ""
	I1003 18:17:44.709107   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.709113   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:44.709117   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:44.709155   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:44.733401   38063 cri.go:89] found id: ""
	I1003 18:17:44.733417   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.733426   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:44.733430   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:44.733469   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:44.757753   38063 cri.go:89] found id: ""
	I1003 18:17:44.757772   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.757780   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:44.757786   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:44.757841   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:44.781910   38063 cri.go:89] found id: ""
	I1003 18:17:44.781926   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.781933   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:44.781939   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:44.781995   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:44.805801   38063 cri.go:89] found id: ""
	I1003 18:17:44.805820   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.805829   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:44.805835   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:44.805882   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:44.830172   38063 cri.go:89] found id: ""
	I1003 18:17:44.830187   38063 logs.go:282] 0 containers: []
	W1003 18:17:44.830195   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:44.830204   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:44.830218   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:44.898633   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:44.898651   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:44.909788   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:44.909802   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:44.964112   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:44.957005   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:44.957997   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:44.959562   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:44.960003   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:44.961510   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:17:44.957005   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:44.957997   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:44.959562   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:44.960003   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:44.961510   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:17:44.964123   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:44.964137   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:45.022483   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:45.022503   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:47.552208   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:47.562597   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:47.562644   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:47.587653   38063 cri.go:89] found id: ""
	I1003 18:17:47.587666   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.587672   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:47.587676   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:47.587722   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:47.611271   38063 cri.go:89] found id: ""
	I1003 18:17:47.611287   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.611294   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:47.611298   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:47.611344   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:47.635604   38063 cri.go:89] found id: ""
	I1003 18:17:47.635617   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.635625   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:47.635631   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:47.635704   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:47.660903   38063 cri.go:89] found id: ""
	I1003 18:17:47.660926   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.660933   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:47.660938   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:47.661007   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:47.686109   38063 cri.go:89] found id: ""
	I1003 18:17:47.686122   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.686129   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:47.686133   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:47.686172   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:47.710137   38063 cri.go:89] found id: ""
	I1003 18:17:47.710153   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.710161   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:47.710167   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:47.710207   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:47.734797   38063 cri.go:89] found id: ""
	I1003 18:17:47.734817   38063 logs.go:282] 0 containers: []
	W1003 18:17:47.734826   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:47.734835   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:47.734849   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:47.745548   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:47.745565   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:47.799254   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:47.792392   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:47.793029   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:47.794533   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:47.794963   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:47.796403   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:17:47.792392   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:47.793029   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:47.794533   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:47.794963   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:47.796403   12434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:17:47.799265   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:47.799274   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:47.861703   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:47.861720   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:47.888938   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:47.888953   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
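Note the fallback chain in the "container status" step: it runs whatever crictl resolves to and falls back to docker ps if that invocation fails. For reference, the full log bundle gathered on each pass, collected into one script (assuming the systemd unit names crio and kubelet used in this run):

	#!/usr/bin/env bash
	# The same log bundle the loop above collects on every pass.
	sudo journalctl -u kubelet -n 400                                        # kubelet
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings
	sudo journalctl -u crio -n 400                                           # CRI-O
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a         # container status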
	I1003 18:17:50.454766   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:50.465005   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:50.465050   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:50.489074   38063 cri.go:89] found id: ""
	I1003 18:17:50.489087   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.489093   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:50.489098   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:50.489139   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:50.513935   38063 cri.go:89] found id: ""
	I1003 18:17:50.513950   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.513959   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:50.513964   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:50.514027   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:50.539148   38063 cri.go:89] found id: ""
	I1003 18:17:50.539166   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.539173   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:50.539179   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:50.539220   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:50.562923   38063 cri.go:89] found id: ""
	I1003 18:17:50.562944   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.562950   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:50.562959   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:50.563021   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:50.587009   38063 cri.go:89] found id: ""
	I1003 18:17:50.587022   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.587029   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:50.587033   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:50.587081   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:50.611334   38063 cri.go:89] found id: ""
	I1003 18:17:50.611350   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.611356   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:50.611361   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:50.611410   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:50.634818   38063 cri.go:89] found id: ""
	I1003 18:17:50.634832   38063 logs.go:282] 0 containers: []
	W1003 18:17:50.634839   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:50.634846   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:50.634856   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:50.696044   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:50.696061   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:50.722679   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:50.722696   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:50.789104   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:50.789122   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:50.800113   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:50.800126   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:50.853877   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:50.846722   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:50.847312   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:50.848906   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:50.849353   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:50.851079   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:17:50.846722   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:50.847312   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:50.848906   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:50.849353   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:50.851079   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:17:53.354772   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:53.365080   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:53.365139   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:53.389900   38063 cri.go:89] found id: ""
	I1003 18:17:53.389913   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.389920   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:53.389930   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:53.389993   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:53.414775   38063 cri.go:89] found id: ""
	I1003 18:17:53.414790   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.414797   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:53.414801   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:53.414847   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:53.439429   38063 cri.go:89] found id: ""
	I1003 18:17:53.439445   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.439454   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:53.439460   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:53.439506   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:53.464200   38063 cri.go:89] found id: ""
	I1003 18:17:53.464214   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.464220   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:53.464225   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:53.464263   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:53.488529   38063 cri.go:89] found id: ""
	I1003 18:17:53.488542   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.488550   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:53.488556   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:53.488612   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:53.512935   38063 cri.go:89] found id: ""
	I1003 18:17:53.512950   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.512957   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:53.512962   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:53.513028   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:53.536738   38063 cri.go:89] found id: ""
	I1003 18:17:53.536754   38063 logs.go:282] 0 containers: []
	W1003 18:17:53.536763   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:53.536771   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:53.536784   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:53.602221   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:53.602237   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:53.613558   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:53.613573   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:53.667019   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:53.660222   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:53.660704   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:53.662310   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:53.662769   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:53.664227   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:17:53.660222   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:53.660704   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:53.662310   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:53.662769   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:53.664227   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:17:53.667029   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:53.667039   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:53.725461   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:53.725480   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:17:56.254692   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:56.264956   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:56.265017   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:56.289747   38063 cri.go:89] found id: ""
	I1003 18:17:56.289764   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.289772   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:56.289779   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:56.289821   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:56.314478   38063 cri.go:89] found id: ""
	I1003 18:17:56.314493   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.314501   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:56.314507   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:56.314557   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:56.338961   38063 cri.go:89] found id: ""
	I1003 18:17:56.338989   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.338998   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:56.339004   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:56.339046   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:56.364770   38063 cri.go:89] found id: ""
	I1003 18:17:56.364784   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.364789   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:56.364793   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:56.364832   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:56.391018   38063 cri.go:89] found id: ""
	I1003 18:17:56.391031   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.391037   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:56.391041   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:56.391081   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:56.415373   38063 cri.go:89] found id: ""
	I1003 18:17:56.415389   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.415398   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:56.415405   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:56.415447   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:56.439537   38063 cri.go:89] found id: ""
	I1003 18:17:56.439554   38063 logs.go:282] 0 containers: []
	W1003 18:17:56.439564   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:56.439572   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:56.439584   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:56.506236   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:56.506256   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:56.517260   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:56.517274   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:56.570626   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:56.564107   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:56.564604   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:56.566115   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:56.566514   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:56.568021   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:56.570639   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:56.570658   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:56.633346   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:56.633369   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
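This gather cycle repeats every few seconds with the same result: no kube-apiserver container ever appears, so every kubectl call against localhost:8441 is refused. A minimal manual version of the same probes, assuming shell access to the minikube node (the curl line is an added illustration, not one of minikube's gather commands; /readyz is the apiserver's standard readiness endpoint):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'     # is an apiserver process running at all?
	sudo crictl ps -a --quiet --name=kube-apiserver  # any apiserver container, even an exited one?
	sudo journalctl -u kubelet -n 400                # kubelet log usually records why it never started
	curl -sk https://localhost:8441/readyz           # probe the port kubectl is being refused on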
	I1003 18:17:59.161404   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:17:59.171988   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:17:59.172046   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:17:59.196437   38063 cri.go:89] found id: ""
	I1003 18:17:59.196449   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.196455   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:17:59.196459   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:17:59.196498   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:17:59.220855   38063 cri.go:89] found id: ""
	I1003 18:17:59.220868   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.220874   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:17:59.220878   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:17:59.220926   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:17:59.246564   38063 cri.go:89] found id: ""
	I1003 18:17:59.246579   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.246587   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:17:59.246595   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:17:59.246655   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:17:59.271407   38063 cri.go:89] found id: ""
	I1003 18:17:59.271422   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.271428   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:17:59.271433   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:17:59.271474   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:17:59.295265   38063 cri.go:89] found id: ""
	I1003 18:17:59.295281   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.295290   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:17:59.295297   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:17:59.295344   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:17:59.319819   38063 cri.go:89] found id: ""
	I1003 18:17:59.319835   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.319849   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:17:59.319853   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:17:59.319893   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:17:59.344045   38063 cri.go:89] found id: ""
	I1003 18:17:59.344058   38063 logs.go:282] 0 containers: []
	W1003 18:17:59.344064   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:17:59.344071   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:17:59.344080   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:17:59.411448   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:17:59.411465   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:17:59.422319   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:17:59.422332   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:17:59.475228   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:17:59.468454   12932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:59.468914   12932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:59.470455   12932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:59.470862   12932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:17:59.472347   12932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:17:59.475255   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:17:59.475270   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:17:59.536088   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:17:59.536106   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:02.065737   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:02.076173   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:02.076214   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:02.101478   38063 cri.go:89] found id: ""
	I1003 18:18:02.101495   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.101505   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:02.101513   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:02.101556   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:02.126528   38063 cri.go:89] found id: ""
	I1003 18:18:02.126541   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.126547   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:02.126551   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:02.126591   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:02.150958   38063 cri.go:89] found id: ""
	I1003 18:18:02.150971   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.150997   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:02.151003   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:02.151051   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:02.176464   38063 cri.go:89] found id: ""
	I1003 18:18:02.176478   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.176485   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:02.176497   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:02.176539   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:02.201345   38063 cri.go:89] found id: ""
	I1003 18:18:02.201361   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.201368   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:02.201373   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:02.201415   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:02.227338   38063 cri.go:89] found id: ""
	I1003 18:18:02.227352   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.227359   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:02.227363   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:02.227407   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:02.253859   38063 cri.go:89] found id: ""
	I1003 18:18:02.253875   38063 logs.go:282] 0 containers: []
	W1003 18:18:02.253882   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:02.253890   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:02.253902   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:02.314960   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:02.314986   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:02.343587   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:02.343605   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:02.412159   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:02.412178   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:02.423525   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:02.423542   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:02.480478   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:02.473940   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:02.474565   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:02.476146   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:02.476539   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:02.477814   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:04.981110   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:04.992430   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:04.992470   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:05.019218   38063 cri.go:89] found id: ""
	I1003 18:18:05.019232   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.019238   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:05.019243   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:05.019282   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:05.042823   38063 cri.go:89] found id: ""
	I1003 18:18:05.042836   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.042841   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:05.042845   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:05.042902   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:05.069124   38063 cri.go:89] found id: ""
	I1003 18:18:05.069141   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.069148   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:05.069152   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:05.069196   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:05.093833   38063 cri.go:89] found id: ""
	I1003 18:18:05.093848   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.093856   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:05.093862   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:05.093932   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:05.119454   38063 cri.go:89] found id: ""
	I1003 18:18:05.119468   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.119475   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:05.119479   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:05.119523   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:05.143897   38063 cri.go:89] found id: ""
	I1003 18:18:05.143914   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.143920   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:05.143925   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:05.143966   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:05.167637   38063 cri.go:89] found id: ""
	I1003 18:18:05.167650   38063 logs.go:282] 0 containers: []
	W1003 18:18:05.167656   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:05.167663   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:05.167674   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:05.195697   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:05.195715   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:05.260408   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:05.260428   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:05.271292   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:05.271309   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:05.324867   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:05.318440   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:05.318912   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:05.320332   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:05.320733   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:05.322261   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:05.324886   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:05.324898   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:07.885833   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:07.895849   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:07.895957   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:07.921467   38063 cri.go:89] found id: ""
	I1003 18:18:07.921479   38063 logs.go:282] 0 containers: []
	W1003 18:18:07.921485   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:07.921490   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:07.921545   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:07.945467   38063 cri.go:89] found id: ""
	I1003 18:18:07.945480   38063 logs.go:282] 0 containers: []
	W1003 18:18:07.945487   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:07.945492   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:07.945539   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:07.970084   38063 cri.go:89] found id: ""
	I1003 18:18:07.970098   38063 logs.go:282] 0 containers: []
	W1003 18:18:07.970105   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:07.970110   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:07.970148   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:07.994263   38063 cri.go:89] found id: ""
	I1003 18:18:07.994278   38063 logs.go:282] 0 containers: []
	W1003 18:18:07.994287   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:07.994293   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:07.994334   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:08.018778   38063 cri.go:89] found id: ""
	I1003 18:18:08.018793   38063 logs.go:282] 0 containers: []
	W1003 18:18:08.018800   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:08.018805   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:08.018844   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:08.043138   38063 cri.go:89] found id: ""
	I1003 18:18:08.043153   38063 logs.go:282] 0 containers: []
	W1003 18:18:08.043159   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:08.043164   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:08.043203   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:08.067785   38063 cri.go:89] found id: ""
	I1003 18:18:08.067799   38063 logs.go:282] 0 containers: []
	W1003 18:18:08.067805   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:08.067811   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:08.067820   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:08.136408   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:08.136429   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:08.147427   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:08.147445   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:08.201110   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:08.194693   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:08.195161   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:08.196715   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:08.197135   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:08.198610   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:08.201124   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:08.201135   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:08.261991   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:08.262010   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:10.791196   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:10.801467   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:10.801525   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:10.827655   38063 cri.go:89] found id: ""
	I1003 18:18:10.827672   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.827683   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:10.827688   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:10.827735   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:10.852558   38063 cri.go:89] found id: ""
	I1003 18:18:10.852574   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.852582   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:10.852588   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:10.852638   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:10.876842   38063 cri.go:89] found id: ""
	I1003 18:18:10.876858   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.876870   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:10.876874   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:10.876918   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:10.902827   38063 cri.go:89] found id: ""
	I1003 18:18:10.902840   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.902846   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:10.902851   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:10.902890   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:10.927840   38063 cri.go:89] found id: ""
	I1003 18:18:10.927855   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.927861   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:10.927865   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:10.927909   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:10.952535   38063 cri.go:89] found id: ""
	I1003 18:18:10.952550   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.952556   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:10.952561   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:10.952602   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:10.976585   38063 cri.go:89] found id: ""
	I1003 18:18:10.976601   38063 logs.go:282] 0 containers: []
	W1003 18:18:10.976610   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:10.976620   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:10.976631   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:10.987359   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:10.987373   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:11.041048   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:11.034604   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:11.035105   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:11.036603   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:11.036989   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:11.038508   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:11.041058   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:11.041068   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:11.101637   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:11.101658   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:11.128867   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:11.128885   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:13.697689   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:13.708864   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:13.708949   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:13.733837   38063 cri.go:89] found id: ""
	I1003 18:18:13.733851   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.733857   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:13.733864   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:13.733915   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:13.757681   38063 cri.go:89] found id: ""
	I1003 18:18:13.757698   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.757707   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:13.757713   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:13.757778   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:13.782545   38063 cri.go:89] found id: ""
	I1003 18:18:13.782560   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.782572   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:13.782576   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:13.782624   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:13.806939   38063 cri.go:89] found id: ""
	I1003 18:18:13.806955   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.806964   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:13.806970   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:13.807041   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:13.831768   38063 cri.go:89] found id: ""
	I1003 18:18:13.831783   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.831790   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:13.831795   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:13.831837   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:13.856076   38063 cri.go:89] found id: ""
	I1003 18:18:13.856093   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.856101   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:13.856107   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:13.856163   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:13.879410   38063 cri.go:89] found id: ""
	I1003 18:18:13.879423   38063 logs.go:282] 0 containers: []
	W1003 18:18:13.879430   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:13.879438   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:13.879450   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:13.944708   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:13.944727   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:13.956175   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:13.956194   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:14.010487   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:14.003834   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:14.004418   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:14.005911   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:14.006368   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:14.007894   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:14.010499   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:14.010514   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:14.071892   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:14.071911   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:16.601878   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:16.612139   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:16.612183   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:16.635115   38063 cri.go:89] found id: ""
	I1003 18:18:16.635128   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.635134   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:16.635139   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:16.635180   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:16.660332   38063 cri.go:89] found id: ""
	I1003 18:18:16.660347   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.660354   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:16.660361   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:16.660416   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:16.683528   38063 cri.go:89] found id: ""
	I1003 18:18:16.683551   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.683560   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:16.683566   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:16.683619   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:16.708287   38063 cri.go:89] found id: ""
	I1003 18:18:16.708304   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.708313   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:16.708319   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:16.708368   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:16.732627   38063 cri.go:89] found id: ""
	I1003 18:18:16.732642   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.732651   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:16.732670   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:16.732712   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:16.757768   38063 cri.go:89] found id: ""
	I1003 18:18:16.757782   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.757788   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:16.757793   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:16.757836   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:16.781970   38063 cri.go:89] found id: ""
	I1003 18:18:16.781997   38063 logs.go:282] 0 containers: []
	W1003 18:18:16.782011   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:16.782020   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:16.782036   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:16.850796   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:16.850813   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:16.862129   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:16.862143   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:16.915039   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:16.908470   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:16.908860   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:16.910345   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:16.910711   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:16.912263   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:16.915050   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:16.915063   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:16.972388   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:16.972405   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:19.502094   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:19.512481   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:19.512541   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:19.537212   38063 cri.go:89] found id: ""
	I1003 18:18:19.537228   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.537236   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:19.537242   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:19.537305   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:19.561717   38063 cri.go:89] found id: ""
	I1003 18:18:19.561734   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.561741   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:19.561746   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:19.561793   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:19.585423   38063 cri.go:89] found id: ""
	I1003 18:18:19.585436   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.585443   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:19.585447   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:19.585490   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:19.609708   38063 cri.go:89] found id: ""
	I1003 18:18:19.609722   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.609728   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:19.609733   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:19.609772   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:19.632853   38063 cri.go:89] found id: ""
	I1003 18:18:19.632869   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.632878   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:19.632884   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:19.632933   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:19.656204   38063 cri.go:89] found id: ""
	I1003 18:18:19.656220   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.656228   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:19.656235   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:19.656287   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:19.680640   38063 cri.go:89] found id: ""
	I1003 18:18:19.680663   38063 logs.go:282] 0 containers: []
	W1003 18:18:19.680669   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:19.680677   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:19.680689   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:19.707259   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:19.707275   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:19.774362   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:19.774380   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:19.785563   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:19.785577   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:19.839901   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:19.833112   13812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:19.833732   13812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:19.835306   13812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:19.835682   13812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:19.837164   13812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
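Port 8441 in the refusals above is this profile's apiserver port (the same endpoint the later control-plane checks probe at https://192.168.49.2:8441/livez); kubectl cannot even fetch the API group list because nothing is listening there yet. A quick manual check from inside the node, as a hedged aside assuming curl is available in the minikube image:

	sudo curl -k https://localhost:8441/livez   # refused while no apiserver runs; returns "ok" once it is up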
	I1003 18:18:19.839911   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:19.839921   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
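The block above is one full iteration of minikube's apiserver wait loop: every ~3s it looks for a kube-apiserver process, asks CRI-O for each expected control-plane container (apiserver, etcd, coredns, scheduler, kube-proxy, controller-manager, kindnet), finds none, and re-gathers the kubelet, dmesg, describe-nodes, and CRI-O logs. The two probes that drive the loop, lifted from the log itself (a sketch of one iteration, not minikube's exact Go logic; the pgrep pattern is quoted here for shell safety):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'      # any live apiserver process?
	sudo crictl ps -a --quiet --name=kube-apiserver   # any apiserver container, running or exited?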
	I1003 18:18:22.400537   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:22.410712   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:22.410758   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:22.434956   38063 cri.go:89] found id: ""
	I1003 18:18:22.434970   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.434988   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:22.434995   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:22.435050   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:22.459920   38063 cri.go:89] found id: ""
	I1003 18:18:22.459936   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.459945   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:22.459950   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:22.460011   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:22.484807   38063 cri.go:89] found id: ""
	I1003 18:18:22.484821   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.484827   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:22.484832   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:22.484876   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:22.510038   38063 cri.go:89] found id: ""
	I1003 18:18:22.510055   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.510063   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:22.510069   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:22.510127   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:22.534586   38063 cri.go:89] found id: ""
	I1003 18:18:22.534606   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.534616   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:22.534622   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:22.534684   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:22.559759   38063 cri.go:89] found id: ""
	I1003 18:18:22.559776   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.559785   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:22.559791   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:22.559847   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:22.584554   38063 cri.go:89] found id: ""
	I1003 18:18:22.584569   38063 logs.go:282] 0 containers: []
	W1003 18:18:22.584579   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:22.584588   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:22.584602   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:22.653550   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:22.653568   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:22.664744   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:22.664760   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:22.718670   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:22.712190   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:22.712660   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:22.714209   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:22.714609   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:22.716119   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:22.718679   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:22.718689   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:22.781634   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:22.781662   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:25.311342   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:25.321538   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:25.321589   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:25.347212   38063 cri.go:89] found id: ""
	I1003 18:18:25.347228   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.347237   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:25.347244   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:25.347288   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:25.373240   38063 cri.go:89] found id: ""
	I1003 18:18:25.373255   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.373261   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:25.373265   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:25.373316   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:25.398262   38063 cri.go:89] found id: ""
	I1003 18:18:25.398280   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.398287   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:25.398293   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:25.398340   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:25.423522   38063 cri.go:89] found id: ""
	I1003 18:18:25.423536   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.423544   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:25.423550   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:25.423609   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:25.448232   38063 cri.go:89] found id: ""
	I1003 18:18:25.448249   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.448258   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:25.448264   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:25.448311   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:25.474690   38063 cri.go:89] found id: ""
	I1003 18:18:25.474704   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.474710   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:25.474716   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:25.474766   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:25.499693   38063 cri.go:89] found id: ""
	I1003 18:18:25.499707   38063 logs.go:282] 0 containers: []
	W1003 18:18:25.499715   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:25.499723   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:25.499733   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:25.526210   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:25.526225   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:25.595354   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:25.595373   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:25.606969   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:25.606998   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:25.662186   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:25.655368   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:25.655970   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:25.657492   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:25.657931   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:25.659386   14051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:25.662197   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:25.662206   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:28.226017   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:28.237132   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:28.237175   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:28.262449   38063 cri.go:89] found id: ""
	I1003 18:18:28.262466   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.262474   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:28.262479   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:28.262524   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:28.287653   38063 cri.go:89] found id: ""
	I1003 18:18:28.287669   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.287679   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:28.287685   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:28.287730   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:28.313255   38063 cri.go:89] found id: ""
	I1003 18:18:28.313269   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.313276   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:28.313280   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:28.313321   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:28.338727   38063 cri.go:89] found id: ""
	I1003 18:18:28.338742   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.338748   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:28.338752   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:28.338793   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:28.363285   38063 cri.go:89] found id: ""
	I1003 18:18:28.363303   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.363312   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:28.363317   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:28.363359   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:28.388945   38063 cri.go:89] found id: ""
	I1003 18:18:28.388958   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.388964   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:28.388969   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:28.389039   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:28.414591   38063 cri.go:89] found id: ""
	I1003 18:18:28.414607   38063 logs.go:282] 0 containers: []
	W1003 18:18:28.414614   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:28.414621   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:28.414630   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:28.425367   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:28.425382   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:28.479472   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:28.472065   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:28.472604   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:28.474900   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:28.475366   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:28.476874   14154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:28.479481   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:28.479491   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:28.538844   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:28.538865   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:28.567294   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:28.567309   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:31.138009   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:31.148430   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:18:31.148480   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:18:31.173355   38063 cri.go:89] found id: ""
	I1003 18:18:31.173368   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.173375   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:18:31.173380   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:18:31.173418   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:18:31.198151   38063 cri.go:89] found id: ""
	I1003 18:18:31.198166   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.198181   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:18:31.198187   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:18:31.198231   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:18:31.223275   38063 cri.go:89] found id: ""
	I1003 18:18:31.223290   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.223296   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:18:31.223300   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:18:31.223343   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:18:31.247221   38063 cri.go:89] found id: ""
	I1003 18:18:31.247237   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.247248   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:18:31.247253   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:18:31.247310   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:18:31.270563   38063 cri.go:89] found id: ""
	I1003 18:18:31.270576   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.270582   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:18:31.270586   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:18:31.270636   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:18:31.295134   38063 cri.go:89] found id: ""
	I1003 18:18:31.295150   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.295159   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:18:31.295165   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:18:31.295204   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:18:31.319654   38063 cri.go:89] found id: ""
	I1003 18:18:31.319668   38063 logs.go:282] 0 containers: []
	W1003 18:18:31.319675   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:18:31.319683   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:18:31.319698   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:18:31.386428   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:18:31.386448   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:18:31.397662   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:18:31.397677   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:18:31.451288   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:18:31.444650   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.445190   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.446750   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.447199   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:18:31.448658   14290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1003 18:18:31.451299   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:18:31.451309   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:18:31.510468   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:18:31.510487   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:18:34.039627   38063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:18:34.050185   38063 kubeadm.go:601] duration metric: took 4m1.950557888s to restartPrimaryControlPlane
	W1003 18:18:34.050251   38063 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1003 18:18:34.050324   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 18:18:34.501082   38063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:18:34.513430   38063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:18:34.521102   38063 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:18:34.521139   38063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:18:34.528531   38063 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:18:34.528540   38063 kubeadm.go:157] found existing configuration files:
	
	I1003 18:18:34.528574   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1003 18:18:34.535908   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:18:34.535967   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:18:34.543072   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1003 18:18:34.550220   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:18:34.550263   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:18:34.557251   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1003 18:18:34.565090   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:18:34.565130   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:18:34.571882   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1003 18:18:34.579174   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:18:34.579210   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
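The four grep/rm pairs above are minikube's stale-kubeconfig sweep: each file under /etc/kubernetes that does not mention the expected control-plane endpoint is deleted before kubeadm runs again. Condensed into one loop, as a hedged sketch using the endpoint and file names from the log:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done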
	I1003 18:18:34.585996   38063 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:18:34.620715   38063 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:18:34.620773   38063 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:18:34.639243   38063 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:18:34.639317   38063 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:18:34.639360   38063 kubeadm.go:318] OS: Linux
	I1003 18:18:34.639397   38063 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:18:34.639466   38063 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:18:34.639529   38063 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:18:34.639587   38063 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:18:34.639687   38063 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:18:34.639749   38063 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:18:34.639803   38063 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:18:34.639863   38063 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:18:34.692781   38063 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:18:34.692898   38063 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:18:34.693025   38063 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:18:34.699300   38063 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:18:34.703358   38063 out.go:252]   - Generating certificates and keys ...
	I1003 18:18:34.703438   38063 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:18:34.703491   38063 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:18:34.703553   38063 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 18:18:34.703602   38063 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 18:18:34.703664   38063 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 18:18:34.703733   38063 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 18:18:34.703790   38063 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 18:18:34.703840   38063 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 18:18:34.703900   38063 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 18:18:34.703962   38063 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 18:18:34.704000   38063 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 18:18:34.704043   38063 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:18:34.953422   38063 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:18:35.214353   38063 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:18:35.447415   38063 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:18:35.645347   38063 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:18:36.220332   38063 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:18:36.220714   38063 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:18:36.222788   38063 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:18:36.225372   38063 out.go:252]   - Booting up control plane ...
	I1003 18:18:36.225492   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:18:36.225605   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:18:36.225672   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:18:36.237955   38063 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:18:36.238117   38063 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:18:36.244390   38063 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:18:36.244573   38063 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:18:36.244608   38063 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:18:36.339701   38063 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:18:36.339860   38063 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:18:36.841336   38063 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.785786ms
	I1003 18:18:36.845100   38063 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:18:36.845207   38063 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1003 18:18:36.845308   38063 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:18:36.845418   38063 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:22:36.846410   38063 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001254073s
	I1003 18:22:36.846572   38063 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001316832s
	I1003 18:22:36.846680   38063 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00135784s
	I1003 18:22:36.846684   38063 kubeadm.go:318] 
	I1003 18:22:36.846803   38063 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:22:36.846887   38063 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:22:36.847019   38063 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:22:36.847152   38063 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:22:36.847221   38063 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:22:36.847290   38063 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:22:36.847293   38063 kubeadm.go:318] 
	I1003 18:22:36.850267   38063 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:22:36.850420   38063 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:22:36.851109   38063 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1003 18:22:36.851222   38063 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
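While kubeadm waits, it polls the three health endpoints listed in the control-plane-check lines above; probing them by hand narrows down which component never started. A hedged aside using the exact URLs from the log (curl assumed available on the node):

	curl -k https://192.168.49.2:8441/livez      # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez        # kube-scheduler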
	W1003 18:22:36.851310   38063 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.785786ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001254073s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001316832s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00135784s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
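The failure text above already spells out the next diagnostic step; as runnable commands (adapted from the message, with sudo added as elsewhere in this log, and CONTAINERID being kubeadm's placeholder for whichever container exited):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID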
	
	I1003 18:22:36.851378   38063 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 18:22:37.292774   38063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:22:37.305190   38063 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:22:37.305239   38063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:22:37.312706   38063 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:22:37.312714   38063 kubeadm.go:157] found existing configuration files:
	
	I1003 18:22:37.312747   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1003 18:22:37.319873   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:22:37.319914   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:22:37.326628   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1003 18:22:37.333616   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:22:37.333654   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:22:37.340503   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1003 18:22:37.347489   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:22:37.347533   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:22:37.354448   38063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1003 18:22:37.361615   38063 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:22:37.361649   38063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:22:37.368313   38063 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:22:37.421185   38063 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:22:37.475455   38063 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:26:40.291288   38063 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1003 18:26:40.291385   38063 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 18:26:40.294089   38063 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:26:40.294149   38063 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:26:40.294247   38063 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:26:40.294331   38063 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:26:40.294363   38063 kubeadm.go:318] OS: Linux
	I1003 18:26:40.294399   38063 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:26:40.294467   38063 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:26:40.294515   38063 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:26:40.294554   38063 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:26:40.294601   38063 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:26:40.294658   38063 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:26:40.294706   38063 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:26:40.294741   38063 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:26:40.294849   38063 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:26:40.294960   38063 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:26:40.295057   38063 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:26:40.295109   38063 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:26:40.297835   38063 out.go:252]   - Generating certificates and keys ...
	I1003 18:26:40.297914   38063 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:26:40.297990   38063 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:26:40.298082   38063 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 18:26:40.298152   38063 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 18:26:40.298217   38063 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 18:26:40.298275   38063 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 18:26:40.298326   38063 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 18:26:40.298376   38063 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 18:26:40.298444   38063 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 18:26:40.298519   38063 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 18:26:40.298554   38063 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 18:26:40.298605   38063 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:26:40.298646   38063 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:26:40.298698   38063 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:26:40.298740   38063 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:26:40.298791   38063 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:26:40.298839   38063 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:26:40.298907   38063 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:26:40.298998   38063 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:26:40.300468   38063 out.go:252]   - Booting up control plane ...
	I1003 18:26:40.300542   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:26:40.300632   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:26:40.300695   38063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:26:40.300779   38063 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:26:40.300871   38063 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:26:40.300963   38063 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:26:40.301061   38063 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:26:40.301100   38063 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:26:40.301207   38063 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:26:40.301294   38063 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:26:40.301341   38063 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500810972s
	I1003 18:26:40.301415   38063 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:26:40.301479   38063 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1003 18:26:40.301550   38063 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:26:40.301629   38063 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:26:40.301688   38063 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001083242s
	I1003 18:26:40.301753   38063 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001112366s
	I1003 18:26:40.301845   38063 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001257154s
	I1003 18:26:40.301849   38063 kubeadm.go:318] 
	I1003 18:26:40.301925   38063 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:26:40.302009   38063 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:26:40.302080   38063 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:26:40.302157   38063 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:26:40.302217   38063 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:26:40.302288   38063 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:26:40.302308   38063 kubeadm.go:318] 
	I1003 18:26:40.302352   38063 kubeadm.go:402] duration metric: took 12m8.237590419s to StartCluster
	I1003 18:26:40.302401   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:26:40.302450   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:26:40.329135   38063 cri.go:89] found id: ""
	I1003 18:26:40.329148   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.329154   38063 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:26:40.329160   38063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:26:40.329203   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:26:40.354340   38063 cri.go:89] found id: ""
	I1003 18:26:40.354354   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.354361   38063 logs.go:284] No container was found matching "etcd"
	I1003 18:26:40.354366   38063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:26:40.354419   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:26:40.380556   38063 cri.go:89] found id: ""
	I1003 18:26:40.380570   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.380576   38063 logs.go:284] No container was found matching "coredns"
	I1003 18:26:40.380581   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:26:40.380640   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:26:40.406655   38063 cri.go:89] found id: ""
	I1003 18:26:40.406670   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.406677   38063 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:26:40.406683   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:26:40.406728   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:26:40.432698   38063 cri.go:89] found id: ""
	I1003 18:26:40.432713   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.432720   38063 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:26:40.432725   38063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:26:40.432769   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:26:40.459363   38063 cri.go:89] found id: ""
	I1003 18:26:40.459378   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.459384   38063 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:26:40.459390   38063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:26:40.459437   38063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:26:40.484951   38063 cri.go:89] found id: ""
	I1003 18:26:40.484964   38063 logs.go:282] 0 containers: []
	W1003 18:26:40.484971   38063 logs.go:284] No container was found matching "kindnet"
	I1003 18:26:40.484997   38063 logs.go:123] Gathering logs for kubelet ...
	I1003 18:26:40.485019   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:26:40.549245   38063 logs.go:123] Gathering logs for dmesg ...
	I1003 18:26:40.549263   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:26:40.560727   38063 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:26:40.560741   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:26:40.616474   38063 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:26:40.609386   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.610009   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.611564   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.611939   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.613451   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:26:40.609386   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.610009   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.611564   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.611939   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:40.613451   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:26:40.616500   38063 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:26:40.616509   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:26:40.676470   38063 logs.go:123] Gathering logs for container status ...
	I1003 18:26:40.676488   38063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1003 18:26:40.704576   38063 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500810972s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001083242s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112366s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001257154s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1003 18:26:40.704638   38063 out.go:285] * 
	W1003 18:26:40.704701   38063 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500810972s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001083242s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112366s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001257154s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:26:40.704715   38063 out.go:285] * 
	W1003 18:26:40.706538   38063 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:26:40.710390   38063 out.go:203] 
	W1003 18:26:40.711880   38063 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500810972s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001083242s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112366s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001257154s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:26:40.711903   38063 out.go:285] * 
	I1003 18:26:40.714182   38063 out.go:203] 
	
	
	==> CRI-O <==
	Oct 03 18:26:45 functional-889240 crio[5881]: time="2025-10-03T18:26:45.931429224Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:45 functional-889240 crio[5881]: time="2025-10-03T18:26:45.931822193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:45 functional-889240 crio[5881]: time="2025-10-03T18:26:45.945443713Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=38d46a06-9206-4a4f-aaf3-386decd6e066 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:45 functional-889240 crio[5881]: time="2025-10-03T18:26:45.946840363Z" level=info msg="createCtr: deleting container ID e29d3c808ca4ed2099155dc74c0f471406c1f819a0cf713cc8c8b8c809937a8a from idIndex" id=38d46a06-9206-4a4f-aaf3-386decd6e066 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:45 functional-889240 crio[5881]: time="2025-10-03T18:26:45.94687779Z" level=info msg="createCtr: removing container e29d3c808ca4ed2099155dc74c0f471406c1f819a0cf713cc8c8b8c809937a8a" id=38d46a06-9206-4a4f-aaf3-386decd6e066 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:45 functional-889240 crio[5881]: time="2025-10-03T18:26:45.946914829Z" level=info msg="createCtr: deleting container e29d3c808ca4ed2099155dc74c0f471406c1f819a0cf713cc8c8b8c809937a8a from storage" id=38d46a06-9206-4a4f-aaf3-386decd6e066 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:45 functional-889240 crio[5881]: time="2025-10-03T18:26:45.949621507Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-889240_kube-system_7e715cb6024854d45a9fa99576167e43_0" id=38d46a06-9206-4a4f-aaf3-386decd6e066 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:47 functional-889240 crio[5881]: time="2025-10-03T18:26:47.925524695Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=35d708ff-eb22-4c83-b820-446d7ff7e706 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:47 functional-889240 crio[5881]: time="2025-10-03T18:26:47.92749918Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=390851b0-3fcf-4dd1-80e9-d4075f398bc9 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:47 functional-889240 crio[5881]: time="2025-10-03T18:26:47.929741602Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-889240/kube-scheduler" id=5d992ca5-2aea-4abc-9c2e-e9d6c37197b3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:47 functional-889240 crio[5881]: time="2025-10-03T18:26:47.93004487Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:47 functional-889240 crio[5881]: time="2025-10-03T18:26:47.931776938Z" level=info msg="Checking image status: kicbase/echo-server:functional-889240" id=8953932c-ee8a-42e0-b282-5d2f8e7e8281 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:47 functional-889240 crio[5881]: time="2025-10-03T18:26:47.940096555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:47 functional-889240 crio[5881]: time="2025-10-03T18:26:47.94071314Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:26:47 functional-889240 crio[5881]: time="2025-10-03T18:26:47.960791317Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=5d992ca5-2aea-4abc-9c2e-e9d6c37197b3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:47 functional-889240 crio[5881]: time="2025-10-03T18:26:47.96366899Z" level=info msg="createCtr: deleting container ID ecf699be96c08531ef2a56e75045cfa005a4be34ea3dda117c7359569fad7274 from idIndex" id=5d992ca5-2aea-4abc-9c2e-e9d6c37197b3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:47 functional-889240 crio[5881]: time="2025-10-03T18:26:47.963723389Z" level=info msg="createCtr: removing container ecf699be96c08531ef2a56e75045cfa005a4be34ea3dda117c7359569fad7274" id=5d992ca5-2aea-4abc-9c2e-e9d6c37197b3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:47 functional-889240 crio[5881]: time="2025-10-03T18:26:47.963770695Z" level=info msg="createCtr: deleting container ecf699be96c08531ef2a56e75045cfa005a4be34ea3dda117c7359569fad7274 from storage" id=5d992ca5-2aea-4abc-9c2e-e9d6c37197b3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:47 functional-889240 crio[5881]: time="2025-10-03T18:26:47.967366789Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-889240_kube-system_7dadd1df42d6a2c3d1907f134f7d5ea7_0" id=5d992ca5-2aea-4abc-9c2e-e9d6c37197b3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:26:47 functional-889240 crio[5881]: time="2025-10-03T18:26:47.974729556Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-889240" id=270d5a1b-231e-4439-9186-da7d5d315b42 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:47 functional-889240 crio[5881]: time="2025-10-03T18:26:47.974840971Z" level=info msg="Image docker.io/kicbase/echo-server:functional-889240 not found" id=270d5a1b-231e-4439-9186-da7d5d315b42 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:47 functional-889240 crio[5881]: time="2025-10-03T18:26:47.974869843Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-889240 found" id=270d5a1b-231e-4439-9186-da7d5d315b42 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:48 functional-889240 crio[5881]: time="2025-10-03T18:26:48.003414023Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-889240" id=a8c0c298-5556-4984-b987-036c28793b00 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:48 functional-889240 crio[5881]: time="2025-10-03T18:26:48.003571735Z" level=info msg="Image localhost/kicbase/echo-server:functional-889240 not found" id=a8c0c298-5556-4984-b987-036c28793b00 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:26:48 functional-889240 crio[5881]: time="2025-10-03T18:26:48.00361765Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-889240 found" id=a8c0c298-5556-4984-b987-036c28793b00 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:26:48.544309   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:48.544800   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:48.546338   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:48.546713   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1003 18:26:48.548373   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:26:48 up  1:09,  0 user,  load average: 0.40, 0.13, 0.06
	Linux functional-889240 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:26:40 functional-889240 kubelet[15004]: E1003 18:26:40.952038   15004 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:26:40 functional-889240 kubelet[15004]:         container etcd start failed in pod etcd-functional-889240_kube-system(a73daf0147d5280c6db538ca59db9fe0): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:40 functional-889240 kubelet[15004]:  > logger="UnhandledError"
	Oct 03 18:26:40 functional-889240 kubelet[15004]: E1003 18:26:40.952069   15004 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-889240" podUID="a73daf0147d5280c6db538ca59db9fe0"
	Oct 03 18:26:43 functional-889240 kubelet[15004]: E1003 18:26:43.547345   15004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-889240?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 03 18:26:43 functional-889240 kubelet[15004]: I1003 18:26:43.698772   15004 kubelet_node_status.go:75] "Attempting to register node" node="functional-889240"
	Oct 03 18:26:43 functional-889240 kubelet[15004]: E1003 18:26:43.699160   15004 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-889240"
	Oct 03 18:26:45 functional-889240 kubelet[15004]: E1003 18:26:45.924695   15004 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:26:45 functional-889240 kubelet[15004]: E1003 18:26:45.949864   15004 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:26:45 functional-889240 kubelet[15004]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:45 functional-889240 kubelet[15004]:  > podSandboxID="5afe648376bae0c19842f5a1c1151818b48e5023850d109e3400d8f2b4d7b310"
	Oct 03 18:26:45 functional-889240 kubelet[15004]: E1003 18:26:45.949968   15004 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:26:45 functional-889240 kubelet[15004]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-889240_kube-system(7e715cb6024854d45a9fa99576167e43): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:45 functional-889240 kubelet[15004]:  > logger="UnhandledError"
	Oct 03 18:26:45 functional-889240 kubelet[15004]: E1003 18:26:45.950027   15004 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-889240" podUID="7e715cb6024854d45a9fa99576167e43"
	Oct 03 18:26:47 functional-889240 kubelet[15004]: E1003 18:26:47.292830   15004 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-889240&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 03 18:26:47 functional-889240 kubelet[15004]: E1003 18:26:47.311203   15004 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-889240.186b0e42e698a181  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-889240,UID:functional-889240,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-889240 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-889240,},FirstTimestamp:2025-10-03 18:22:39.917703553 +0000 UTC m=+1.131431312,LastTimestamp:2025-10-03 18:22:39.917703553 +0000 UTC m=+1.131431312,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-889240,}"
	Oct 03 18:26:47 functional-889240 kubelet[15004]: E1003 18:26:47.924783   15004 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-889240\" not found" node="functional-889240"
	Oct 03 18:26:47 functional-889240 kubelet[15004]: E1003 18:26:47.967879   15004 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:26:47 functional-889240 kubelet[15004]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:47 functional-889240 kubelet[15004]:  > podSandboxID="cc37714218db619cb7a417ce510ab6d24921b06cab2510376343b7b5c57bba9a"
	Oct 03 18:26:47 functional-889240 kubelet[15004]: E1003 18:26:47.967997   15004 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:26:47 functional-889240 kubelet[15004]:         container kube-scheduler start failed in pod kube-scheduler-functional-889240_kube-system(7dadd1df42d6a2c3d1907f134f7d5ea7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:26:47 functional-889240 kubelet[15004]:  > logger="UnhandledError"
	Oct 03 18:26:47 functional-889240 kubelet[15004]: E1003 18:26:47.968041   15004 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-889240" podUID="7dadd1df42d6a2c3d1907f134f7d5ea7"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-889240 -n functional-889240: exit status 2 (345.077448ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-889240" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (2.30s)
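Triage note: every control-plane container in the logs above dies at CreateContainer with CRI-O's `cannot open sd-bus: No such file or directory`, which usually means the runtime is configured for the systemd cgroup manager while no systemd D-Bus socket is reachable inside the node container; that would explain both the empty container-status table and the apiserver never answering on 8441. A minimal triage sketch, assuming shell access to the node (`minikube ssh -p functional-889240`) and stock CRI-O config paths; the cgroupfs fallback is an assumption, not a verified fix for this run:

# 1. Confirm no kube container ever started (matches the empty table above):
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause

# 2. Check which cgroup manager CRI-O uses and whether systemd sockets exist:
sudo grep -R "cgroup_manager" /etc/crio/ 2>/dev/null
ls -l /run/systemd/private /run/dbus/system_bus_socket 2>/dev/null

# 3. If the manager is "systemd" and the sockets are missing, switching the
#    CRI-O config to cgroupfs and restarting the runtime is one plausible
#    workaround (assumption):
#      cgroup_manager = "cgroupfs"
#      conmon_cgroup  = "pod"
sudo systemctl restart crio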

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-889240 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-889240 create deployment hello-node --image kicbase/echo-server: exit status 1 (66.5739ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-889240 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.07s)
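This and every ServiceCmd subtest below fail with the same `dial tcp 192.168.49.2:8441: connect: connection refused`, so they are downstream of the dead apiserver rather than independent regressions. A hypothetical pre-flight gate (not part of the suite) that separates the two cases; `kubectl get --raw /readyz` queries the apiserver readiness endpoint directly:

# Skip dependent service assertions when the apiserver cannot answer:
if kubectl --context functional-889240 get --raw /readyz >/dev/null 2>&1; then
  echo "apiserver ready; service assertions are meaningful"
else
  echo "apiserver unreachable; ServiceCmd failures are expected knock-on noise" >&2
fi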

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889240 service list: exit status 103 (296.393131ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-889240 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-889240"

                                                
                                                
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-amd64 -p functional-889240 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-889240 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-889240\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889240 service list -o json: exit status 103 (290.505513ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-889240 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-889240"

                                                
                                                
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-amd64 -p functional-889240 service list -o json": exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)
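On a healthy cluster `service list -o json` prints a JSON array of services; here it exits 103 with advice text on stdout instead. A consumption sketch under stated assumptions (the `.Name` field name and `jq` availability are assumptions about the output shape, not verified against this build):

# Fail fast on the non-zero exit before trying to parse anything:
if json=$(out/minikube-linux-amd64 -p functional-889240 service list -o json); then
  echo "$json" | jq -r '.[].Name'
else
  echo "service list exited non-zero; cluster likely stopped" >&2
fi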

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889240 service --namespace=default --https --url hello-node: exit status 103 (292.422099ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-889240 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-889240"

                                                
                                                
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-889240 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889240 service hello-node --url --format={{.IP}}: exit status 103 (312.412561ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-889240 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-889240"

                                                
                                                
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-889240 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-889240 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-889240\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.31s)
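`--format={{.IP}}` is a Go template, so on a working cluster the command prints a bare node IP; the assertion fails because stdout carries the multi-line advisory instead. A hypothetical guard sketch that distinguishes "no IP because the cluster is down" from a genuine formatting bug:

out=$(out/minikube-linux-amd64 -p functional-889240 service hello-node --url --format={{.IP}})
if [[ "$out" =~ ^[0-9]+(\.[0-9]+){3}$ ]]; then
  echo "node IP: $out"
else
  echo "no IP in output (state=Stopped?): $out" >&2
fi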

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 image load --daemon kicbase/echo-server:functional-889240 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-889240" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.07s)
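The three image-load failures here and below share one symptom: `image ls` never shows the loaded tag. That is most plausibly another knock-on effect of the broken node rather than an independent image-handling bug. The manual equivalent of what these tests automate, as a sketch (profile name taken from this report; the pull/tag steps mirror the ImageTagAndLoadDaemon log below):

docker pull kicbase/echo-server:latest
docker tag kicbase/echo-server:latest kicbase/echo-server:functional-889240
out/minikube-linux-amd64 -p functional-889240 image load --daemon kicbase/echo-server:functional-889240
# Verify the load by listing the node's image store:
out/minikube-linux-amd64 -p functional-889240 image ls | grep echo-server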

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889240 service hello-node --url: exit status 103 (312.42194ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-889240 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-889240"

                                                
                                                
-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-889240 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-889240 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-889240"
functional_test.go:1579: failed to parse "* The control-plane node functional-889240 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-889240\"": parse "* The control-plane node functional-889240 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-889240\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 image load --daemon kicbase/echo-server:functional-889240 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-889240" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.12s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-889240
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 image load --daemon kicbase/echo-server:functional-889240 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-889240" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.28s)

TestFunctional/parallel/MountCmd/any-port (2.48s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-889240 /tmp/TestFunctionalparallelMountCmdany-port2363591403/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759516010263030140" to /tmp/TestFunctionalparallelMountCmdany-port2363591403/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759516010263030140" to /tmp/TestFunctionalparallelMountCmdany-port2363591403/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759516010263030140" to /tmp/TestFunctionalparallelMountCmdany-port2363591403/001/test-1759516010263030140
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889240 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (330.42299ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1003 18:26:50.594455   12212 retry.go:31] will retry after 297.961951ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  3 18:26 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  3 18:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  3 18:26 test-1759516010263030140
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh cat /mount-9p/test-1759516010263030140
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-889240 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-889240 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (56.552887ms)
** stderr ** 
	E1003 18:26:51.948940   57659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	error: unable to recognize "testdata/busybox-mount-test.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused
** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-889240 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889240 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (323.688856ms)
-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=40809)
	total 2
	-rw-r--r-- 1 docker docker 24 Oct  3 18:26 created-by-test
	-rw-r--r-- 1 docker docker 24 Oct  3 18:26 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Oct  3 18:26 test-1759516010263030140
	cat: /mount-9p/pod-dates: No such file or directory
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-889240 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-889240 /tmp/TestFunctionalparallelMountCmdany-port2363591403/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-889240 /tmp/TestFunctionalparallelMountCmdany-port2363591403/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalparallelMountCmdany-port2363591403/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:40809
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port2363591403/001 to /mount-9p
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-889240 /tmp/TestFunctionalparallelMountCmdany-port2363591403/001:/mount-9p --alsologtostderr -v=1] stderr:
I1003 18:26:50.337124   55975 out.go:360] Setting OutFile to fd 1 ...
I1003 18:26:50.337467   55975 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:26:50.337479   55975 out.go:374] Setting ErrFile to fd 2...
I1003 18:26:50.337485   55975 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:26:50.337808   55975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
I1003 18:26:50.338184   55975 mustload.go:65] Loading cluster: functional-889240
I1003 18:26:50.338566   55975 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:26:50.339239   55975 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
I1003 18:26:50.360712   55975 host.go:66] Checking if "functional-889240" exists ...
I1003 18:26:50.361043   55975 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1003 18:26:50.442483   55975 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-03 18:26:50.431176551 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1003 18:26:50.442663   55975 cli_runner.go:164] Run: docker network inspect functional-889240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1003 18:26:50.471756   55975 out.go:179] * Mounting host path /tmp/TestFunctionalparallelMountCmdany-port2363591403/001 into VM as /mount-9p ...
I1003 18:26:50.473078   55975 out.go:179]   - Mount type:   9p
I1003 18:26:50.474259   55975 out.go:179]   - User ID:      docker
I1003 18:26:50.475596   55975 out.go:179]   - Group ID:     docker
I1003 18:26:50.476818   55975 out.go:179]   - Version:      9p2000.L
I1003 18:26:50.477792   55975 out.go:179]   - Message Size: 262144
I1003 18:26:50.479151   55975 out.go:179]   - Options:      map[]
I1003 18:26:50.480532   55975 out.go:179]   - Bind Address: 192.168.49.1:40809
I1003 18:26:50.481592   55975 out.go:179] * Userspace file server: 
I1003 18:26:50.481937   55975 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1003 18:26:50.482016   55975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
I1003 18:26:50.501255   55975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
I1003 18:26:50.610000   55975 mount.go:180] unmount for /mount-9p ran successfully
I1003 18:26:50.610023   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1003 18:26:50.619330   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=40809,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1003 18:26:50.666058   55975 main.go:125] stdlog: ufs.go:141 connected
I1003 18:26:50.666233   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tversion tag 65535 msize 262144 version '9P2000.L'
I1003 18:26:50.666291   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rversion tag 65535 msize 262144 version '9P2000'
I1003 18:26:50.666553   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1003 18:26:50.666630   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rattach tag 0 aqid (20fa074 ab53a716 'd')
I1003 18:26:50.666933   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tstat tag 0 fid 0
I1003 18:26:50.667111   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa074 ab53a716 'd') m d775 at 0 mt 1759516010 l 4096 t 0 d 0 ext )
I1003 18:26:50.668874   55975 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/.mount-process: {Name:mk916060b268070e9292615f6b8779ebd9f27dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1003 18:26:50.669108   55975 mount.go:105] mount successful: ""
I1003 18:26:50.670758   55975 out.go:179] * Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port2363591403/001 to /mount-9p
I1003 18:26:50.671804   55975 out.go:203] 
I1003 18:26:50.673861   55975 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1003 18:26:51.568419   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tstat tag 0 fid 0
I1003 18:26:51.568564   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa074 ab53a716 'd') m d775 at 0 mt 1759516010 l 4096 t 0 d 0 ext )
I1003 18:26:51.569011   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Twalk tag 0 fid 0 newfid 1 
I1003 18:26:51.569087   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rwalk tag 0 
I1003 18:26:51.569214   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Topen tag 0 fid 1 mode 0
I1003 18:26:51.569281   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Ropen tag 0 qid (20fa074 ab53a716 'd') iounit 0
I1003 18:26:51.569381   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tstat tag 0 fid 0
I1003 18:26:51.569497   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa074 ab53a716 'd') m d775 at 0 mt 1759516010 l 4096 t 0 d 0 ext )
I1003 18:26:51.569781   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tread tag 0 fid 1 offset 0 count 262120
I1003 18:26:51.570034   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rread tag 0 count 258
I1003 18:26:51.570186   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tread tag 0 fid 1 offset 258 count 261862
I1003 18:26:51.570221   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rread tag 0 count 0
I1003 18:26:51.570356   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tread tag 0 fid 1 offset 258 count 262120
I1003 18:26:51.570400   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rread tag 0 count 0
I1003 18:26:51.570549   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Twalk tag 0 fid 0 newfid 2 0:'test-1759516010263030140' 
I1003 18:26:51.570617   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rwalk tag 0 (20fa077 ab53a716 '') 
I1003 18:26:51.570762   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tstat tag 0 fid 2
I1003 18:26:51.570872   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rstat tag 0 st ('test-1759516010263030140' 'jenkins' 'balintp' '' q (20fa077 ab53a716 '') m 644 at 0 mt 1759516010 l 24 t 0 d 0 ext )
I1003 18:26:51.571053   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tstat tag 0 fid 2
I1003 18:26:51.571170   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rstat tag 0 st ('test-1759516010263030140' 'jenkins' 'balintp' '' q (20fa077 ab53a716 '') m 644 at 0 mt 1759516010 l 24 t 0 d 0 ext )
I1003 18:26:51.571310   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tclunk tag 0 fid 2
I1003 18:26:51.571362   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rclunk tag 0
I1003 18:26:51.571523   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1003 18:26:51.571568   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rwalk tag 0 (20fa076 ab53a716 '') 
I1003 18:26:51.571656   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tstat tag 0 fid 2
I1003 18:26:51.571737   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa076 ab53a716 '') m 644 at 0 mt 1759516010 l 24 t 0 d 0 ext )
I1003 18:26:51.571841   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tstat tag 0 fid 2
I1003 18:26:51.571933   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa076 ab53a716 '') m 644 at 0 mt 1759516010 l 24 t 0 d 0 ext )
I1003 18:26:51.572051   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tclunk tag 0 fid 2
I1003 18:26:51.572084   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rclunk tag 0
I1003 18:26:51.572200   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1003 18:26:51.572240   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rwalk tag 0 (20fa075 ab53a716 '') 
I1003 18:26:51.572314   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tstat tag 0 fid 2
I1003 18:26:51.572427   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa075 ab53a716 '') m 644 at 0 mt 1759516010 l 24 t 0 d 0 ext )
I1003 18:26:51.572544   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tstat tag 0 fid 2
I1003 18:26:51.572639   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa075 ab53a716 '') m 644 at 0 mt 1759516010 l 24 t 0 d 0 ext )
I1003 18:26:51.572743   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tclunk tag 0 fid 2
I1003 18:26:51.572779   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rclunk tag 0
I1003 18:26:51.572915   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tread tag 0 fid 1 offset 258 count 262120
I1003 18:26:51.572950   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rread tag 0 count 0
I1003 18:26:51.573067   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tclunk tag 0 fid 1
I1003 18:26:51.573113   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rclunk tag 0
I1003 18:26:51.886674   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Twalk tag 0 fid 0 newfid 1 0:'test-1759516010263030140' 
I1003 18:26:51.886729   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rwalk tag 0 (20fa077 ab53a716 '') 
I1003 18:26:51.886866   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tstat tag 0 fid 1
I1003 18:26:51.886986   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rstat tag 0 st ('test-1759516010263030140' 'jenkins' 'balintp' '' q (20fa077 ab53a716 '') m 644 at 0 mt 1759516010 l 24 t 0 d 0 ext )
I1003 18:26:51.887128   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Twalk tag 0 fid 1 newfid 2 
I1003 18:26:51.887164   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rwalk tag 0 
I1003 18:26:51.887264   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Topen tag 0 fid 2 mode 0
I1003 18:26:51.887322   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Ropen tag 0 qid (20fa077 ab53a716 '') iounit 0
I1003 18:26:51.887423   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tstat tag 0 fid 1
I1003 18:26:51.887506   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rstat tag 0 st ('test-1759516010263030140' 'jenkins' 'balintp' '' q (20fa077 ab53a716 '') m 644 at 0 mt 1759516010 l 24 t 0 d 0 ext )
I1003 18:26:51.887753   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tread tag 0 fid 2 offset 0 count 24
I1003 18:26:51.887810   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rread tag 0 count 24
I1003 18:26:51.888002   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tclunk tag 0 fid 2
I1003 18:26:51.888043   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rclunk tag 0
I1003 18:26:51.888146   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tclunk tag 0 fid 1
I1003 18:26:51.888184   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rclunk tag 0
I1003 18:26:52.260775   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tstat tag 0 fid 0
I1003 18:26:52.260895   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa074 ab53a716 'd') m d775 at 0 mt 1759516010 l 4096 t 0 d 0 ext )
I1003 18:26:52.261302   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Twalk tag 0 fid 0 newfid 1 
I1003 18:26:52.261423   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rwalk tag 0 
I1003 18:26:52.261640   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Topen tag 0 fid 1 mode 0
I1003 18:26:52.261693   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Ropen tag 0 qid (20fa074 ab53a716 'd') iounit 0
I1003 18:26:52.261863   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tstat tag 0 fid 0
I1003 18:26:52.262016   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa074 ab53a716 'd') m d775 at 0 mt 1759516010 l 4096 t 0 d 0 ext )
I1003 18:26:52.262267   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tread tag 0 fid 1 offset 0 count 262120
I1003 18:26:52.262422   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rread tag 0 count 258
I1003 18:26:52.262611   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tread tag 0 fid 1 offset 258 count 261862
I1003 18:26:52.262686   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rread tag 0 count 0
I1003 18:26:52.262859   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tread tag 0 fid 1 offset 258 count 262120
I1003 18:26:52.262905   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rread tag 0 count 0
I1003 18:26:52.263067   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Twalk tag 0 fid 0 newfid 2 0:'test-1759516010263030140' 
I1003 18:26:52.263116   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rwalk tag 0 (20fa077 ab53a716 '') 
I1003 18:26:52.263297   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tstat tag 0 fid 2
I1003 18:26:52.263423   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rstat tag 0 st ('test-1759516010263030140' 'jenkins' 'balintp' '' q (20fa077 ab53a716 '') m 644 at 0 mt 1759516010 l 24 t 0 d 0 ext )
I1003 18:26:52.263593   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tstat tag 0 fid 2
I1003 18:26:52.263705   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rstat tag 0 st ('test-1759516010263030140' 'jenkins' 'balintp' '' q (20fa077 ab53a716 '') m 644 at 0 mt 1759516010 l 24 t 0 d 0 ext )
I1003 18:26:52.263860   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tclunk tag 0 fid 2
I1003 18:26:52.263892   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rclunk tag 0
I1003 18:26:52.264091   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1003 18:26:52.264151   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rwalk tag 0 (20fa076 ab53a716 '') 
I1003 18:26:52.264301   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tstat tag 0 fid 2
I1003 18:26:52.264407   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa076 ab53a716 '') m 644 at 0 mt 1759516010 l 24 t 0 d 0 ext )
I1003 18:26:52.264568   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tstat tag 0 fid 2
I1003 18:26:52.264661   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa076 ab53a716 '') m 644 at 0 mt 1759516010 l 24 t 0 d 0 ext )
I1003 18:26:52.264835   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tclunk tag 0 fid 2
I1003 18:26:52.264863   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rclunk tag 0
I1003 18:26:52.265031   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1003 18:26:52.265086   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rwalk tag 0 (20fa075 ab53a716 '') 
I1003 18:26:52.265306   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tstat tag 0 fid 2
I1003 18:26:52.265446   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa075 ab53a716 '') m 644 at 0 mt 1759516010 l 24 t 0 d 0 ext )
I1003 18:26:52.265642   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tstat tag 0 fid 2
I1003 18:26:52.265760   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa075 ab53a716 '') m 644 at 0 mt 1759516010 l 24 t 0 d 0 ext )
I1003 18:26:52.265921   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tclunk tag 0 fid 2
I1003 18:26:52.265989   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rclunk tag 0
I1003 18:26:52.266199   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tread tag 0 fid 1 offset 258 count 262120
I1003 18:26:52.266325   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rread tag 0 count 0
I1003 18:26:52.266546   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tclunk tag 0 fid 1
I1003 18:26:52.266600   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rclunk tag 0
I1003 18:26:52.268051   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1003 18:26:52.268110   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rerror tag 0 ename 'file not found' ecode 0
I1003 18:26:52.628012   55975 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53986 Tclunk tag 0 fid 0
I1003 18:26:52.628058   55975 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53986 Rclunk tag 0
I1003 18:26:52.628754   55975 main.go:125] stdlog: ufs.go:147 disconnected
I1003 18:26:52.648821   55975 out.go:179] * Unmounting /mount-9p ...
I1003 18:26:52.650048   55975 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1003 18:26:52.657048   55975 mount.go:180] unmount for /mount-9p ran successfully
I1003 18:26:52.657140   55975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/.mount-process: {Name:mk916060b268070e9292615f6b8779ebd9f27dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1003 18:26:52.658603   55975 out.go:203] 
W1003 18:26:52.660482   55975 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1003 18:26:52.661620   55975 out.go:203] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (2.48s)
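In this run the mount itself came up (the first findmnt probe failed, the retry succeeded); the test then died because the apiserver at 192.168.49.2:8441 refused connections. For reference, the probe that functional_test_mount_test.go:115 retries is a findmnt check run over `minikube ssh`. A rough sketch of that loop, with the binary path and profile taken from this log; the attempt count and the ~300ms delay are illustrative stand-ins for the test's retry.go backoff:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// mounted reports whether the guest sees a 9p filesystem at /mount-9p.
func mounted() bool {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-889240",
		"ssh", "findmnt -T /mount-9p | grep 9p")
	return cmd.Run() == nil // non-zero exit: the mount is not visible yet
}

func main() {
	for attempt := 0; attempt < 3; attempt++ {
		if mounted() {
			fmt.Println("9p mount is visible in the guest")
			return
		}
		time.Sleep(300 * time.Millisecond) // comparable to the ~298ms retry above
	}
	fmt.Println("9p mount never became visible")
}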

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 image save kicbase/echo-server:functional-889240 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)
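The assertion at functional_test.go:401 is a plain existence check on the tarball after `image save`; the missing file also explains the stat failure in the ImageLoadFromFile test below. A minimal sketch of the equivalent check, with the path copied from the log:

package main

import (
	"fmt"
	"os"
)

func main() {
	tarball := "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar"
	// `image save` should have produced this file; in this run it did not.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("tarball missing after image save:", err)
	}
}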

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>
** stderr ** 
	I1003 18:26:51.696744   57399 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:26:51.697106   57399 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:26:51.697120   57399 out.go:374] Setting ErrFile to fd 2...
	I1003 18:26:51.697127   57399 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:26:51.697363   57399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:26:51.697947   57399 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:26:51.698120   57399 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:26:51.698550   57399 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
	I1003 18:26:51.721625   57399 ssh_runner.go:195] Run: systemctl --version
	I1003 18:26:51.721687   57399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
	I1003 18:26:51.742713   57399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
	I1003 18:26:51.845530   57399 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1003 18:26:51.845591   57399 cache_images.go:254] Failed to load cached images for "functional-889240": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1003 18:26:51.845614   57399 cache_images.go:266] failed pushing to: functional-889240
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-889240
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 image save --daemon kicbase/echo-server:functional-889240 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-889240
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-889240: exit status 1 (18.402084ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-889240
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-889240
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.3s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-889240 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-889240 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1003 18:26:52.725620   58186 out.go:360] Setting OutFile to fd 1 ...
I1003 18:26:52.725712   58186 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:26:52.725723   58186 out.go:374] Setting ErrFile to fd 2...
I1003 18:26:52.725730   58186 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:26:52.726031   58186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
I1003 18:26:52.726282   58186 mustload.go:65] Loading cluster: functional-889240
I1003 18:26:52.726728   58186 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:26:52.727316   58186 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
I1003 18:26:52.746944   58186 host.go:66] Checking if "functional-889240" exists ...
I1003 18:26:52.747304   58186 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1003 18:26:52.822426   58186 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:26:52.809700963 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1003 18:26:52.822528   58186 api_server.go:166] Checking apiserver status ...
I1003 18:26:52.822574   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1003 18:26:52.822614   58186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
I1003 18:26:52.842630   58186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
W1003 18:26:52.954330   58186 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1003 18:26:52.956451   58186 out.go:179] * The control-plane node functional-889240 apiserver is not running: (state=Stopped)
I1003 18:26:52.958098   58186 out.go:179]   To start a cluster, run: "minikube start -p functional-889240"
stdout: * The control-plane node functional-889240 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-889240"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-889240 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-889240 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-889240 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-889240 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-889240 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-889240 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.30s)
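The exit status 103 comes from the apiserver probe visible at api_server.go:166-170 in the stderr above: minikube sshes into the node and pgreps for a kube-apiserver process, and a non-zero exit is surfaced as state=Stopped. A standalone sketch of that probe (command strings copied from the log; collapsing every error into "stopped" is a simplification):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Look for a kube-apiserver process inside the minikube node.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-889240",
		"ssh", "sudo pgrep -xnf kube-apiserver.*minikube.*")
	if err := cmd.Run(); err != nil {
		// pgrep exits 1 when nothing matches, matching the log above.
		fmt.Println("apiserver appears stopped:", err)
		return
	}
	fmt.Println("apiserver process found")
}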

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.07s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-889240 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-889240 apply -f testdata/testsvc.yaml: exit status 1 (68.235963ms)
** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-889240 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (94.78s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1003 18:26:53.036628   12212 retry.go:31] will retry after 1.7254578s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-889240 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-889240 get svc nginx-svc: exit status 1 (48.344009ms)
** stderr ** 
	E1003 18:28:27.808773   63144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:28:27.809109   63144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:28:27.810518   63144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:28:27.810785   63144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1003 18:28:27.812206   63144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-889240 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (94.78s)
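Every retry in this test fails before any network I/O happens: the tunnel never published a service IP, so the assembled request URL is the bare "http:" and net/http rejects it for having no host. A self-contained reproduction of the exact error in the log:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// With no tunnel IP, the test's URL is a scheme and nothing else.
	_, err := http.Get("http:")
	fmt.Println(err) // Get "http:": http: no Host in request URL
}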

TestMultiControlPlane/serial/StartCluster (500.73s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1003 18:31:51.839437   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:31:51.845874   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:31:51.857243   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:31:51.878673   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:31:51.920052   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:31:52.001460   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:31:52.162947   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:31:52.484639   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:31:53.126680   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:31:54.408282   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:31:56.971193   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:32:02.092918   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:32:12.334606   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:32:32.816275   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:33:13.778950   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:34:35.703204   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:36:51.838585   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:37:19.544766   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (8m19.441897701s)
-- stdout --
	* [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	
-- /stdout --
** stderr ** 
	I1003 18:30:55.351405   64909 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:30:55.351662   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351671   64909 out.go:374] Setting ErrFile to fd 2...
	I1003 18:30:55.351675   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351854   64909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:30:55.352339   64909 out.go:368] Setting JSON to false
	I1003 18:30:55.353203   64909 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4406,"bootTime":1759511849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:30:55.353289   64909 start.go:140] virtualization: kvm guest
	I1003 18:30:55.355458   64909 out.go:179] * [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:30:55.356815   64909 notify.go:220] Checking for updates...
	I1003 18:30:55.356884   64909 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:30:55.358389   64909 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:30:55.359964   64909 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:30:55.361351   64909 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:30:55.362647   64909 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:30:55.363956   64909 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:30:55.365351   64909 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:30:55.387768   64909 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:30:55.387885   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.443407   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.433728571 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.443516   64909 docker.go:318] overlay module found
	I1003 18:30:55.445440   64909 out.go:179] * Using the docker driver based on user configuration
	I1003 18:30:55.446777   64909 start.go:304] selected driver: docker
	I1003 18:30:55.446793   64909 start.go:924] validating driver "docker" against <nil>
	I1003 18:30:55.446808   64909 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:30:55.447403   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.498777   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.489521827 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.498958   64909 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 18:30:55.499206   64909 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:30:55.501187   64909 out.go:179] * Using Docker driver with root privileges
	I1003 18:30:55.502312   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:30:55.502386   64909 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 18:30:55.502397   64909 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 18:30:55.502459   64909 start.go:348] cluster config:
	{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:30:55.503779   64909 out.go:179] * Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	I1003 18:30:55.504816   64909 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:30:55.506028   64909 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:30:55.507131   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:55.507167   64909 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:30:55.507169   64909 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:30:55.507175   64909 cache.go:58] Caching tarball of preloaded images
	I1003 18:30:55.507294   64909 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:30:55.507311   64909 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:30:55.507736   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:30:55.507764   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json: {Name:mk1ece959bac74a473416f0dfc8af04a6136d7b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:30:55.527458   64909 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:30:55.527478   64909 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:30:55.527494   64909 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:30:55.527527   64909 start.go:360] acquireMachinesLock for ha-422561: {Name:mk32fd04a5d9b5f89831583bab7d7527f4d187a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:30:55.527631   64909 start.go:364] duration metric: took 81.336µs to acquireMachinesLock for "ha-422561"
	I1003 18:30:55.527657   64909 start.go:93] Provisioning new machine with config: &{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:30:55.527748   64909 start.go:125] createHost starting for "" (driver="docker")
	I1003 18:30:55.529663   64909 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1003 18:30:55.529898   64909 start.go:159] libmachine.API.Create for "ha-422561" (driver="docker")
	I1003 18:30:55.529933   64909 client.go:168] LocalClient.Create starting
	I1003 18:30:55.530028   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem
	I1003 18:30:55.530072   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530097   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530187   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem
	I1003 18:30:55.530226   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530238   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530612   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 18:30:55.547068   64909 cli_runner.go:211] docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 18:30:55.547129   64909 network_create.go:284] running [docker network inspect ha-422561] to gather additional debugging logs...
	I1003 18:30:55.547146   64909 cli_runner.go:164] Run: docker network inspect ha-422561
	W1003 18:30:55.563141   64909 cli_runner.go:211] docker network inspect ha-422561 returned with exit code 1
	I1003 18:30:55.563167   64909 network_create.go:287] error running [docker network inspect ha-422561]: docker network inspect ha-422561: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-422561 not found
	I1003 18:30:55.563179   64909 network_create.go:289] output of [docker network inspect ha-422561]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-422561 not found
	
	** /stderr **
	I1003 18:30:55.563276   64909 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:30:55.579301   64909 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00157b3a0}
	I1003 18:30:55.579336   64909 network_create.go:124] attempt to create docker network ha-422561 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1003 18:30:55.579388   64909 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-422561 ha-422561
	I1003 18:30:55.634233   64909 network_create.go:108] docker network ha-422561 192.168.49.0/24 created
	I1003 18:30:55.634260   64909 kic.go:121] calculated static IP "192.168.49.2" for the "ha-422561" container
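
The static IP follows directly from the subnet minikube just reserved: the gateway takes the first host address (.1) and the first node takes the second (.2). A small Go sketch under that assumption; nthIP is a hypothetical helper and is only valid for small offsets inside a /24:

	package main

	import (
		"fmt"
		"net"
	)

	// nthIP returns the subnet's network address offset by n.
	// Hypothetical helper: only safe for small n within a /24.
	func nthIP(ipnet *net.IPNet, n byte) net.IP {
		ip := ipnet.IP.To4()
		out := make(net.IP, len(ip))
		copy(out, ip)
		out[3] += n
		return out
	}

	func main() {
		_, subnet, _ := net.ParseCIDR("192.168.49.0/24")
		fmt.Println("gateway:     ", nthIP(subnet, 1)) // 192.168.49.1
		fmt.Println("first client:", nthIP(subnet, 2)) // 192.168.49.2, the static IP above
	}
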
	I1003 18:30:55.634318   64909 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 18:30:55.649960   64909 cli_runner.go:164] Run: docker volume create ha-422561 --label name.minikube.sigs.k8s.io=ha-422561 --label created_by.minikube.sigs.k8s.io=true
	I1003 18:30:55.667186   64909 oci.go:103] Successfully created a docker volume ha-422561
	I1003 18:30:55.667250   64909 cli_runner.go:164] Run: docker run --rm --name ha-422561-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --entrypoint /usr/bin/test -v ha-422561:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 18:30:56.041615   64909 oci.go:107] Successfully prepared a docker volume ha-422561
	I1003 18:30:56.041648   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:56.041669   64909 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 18:30:56.041727   64909 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 18:31:00.326417   64909 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.284654466s)
	I1003 18:31:00.326457   64909 kic.go:203] duration metric: took 4.284784967s to extract preloaded images to volume ...
	W1003 18:31:00.326567   64909 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1003 18:31:00.326610   64909 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1003 18:31:00.326657   64909 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 18:31:00.381592   64909 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-422561 --name ha-422561 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-422561 --network ha-422561 --ip 192.168.49.2 --volume ha-422561:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1003 18:31:00.641348   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Running}}
	I1003 18:31:00.659876   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:00.678319   64909 cli_runner.go:164] Run: docker exec ha-422561 stat /var/lib/dpkg/alternatives/iptables
	I1003 18:31:00.728414   64909 oci.go:144] the created container "ha-422561" has a running status.
	I1003 18:31:00.728450   64909 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa...
	I1003 18:31:01.103610   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1003 18:31:01.103663   64909 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 18:31:01.128670   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.147200   64909 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 18:31:01.147218   64909 kic_runner.go:114] Args: [docker exec --privileged ha-422561 chown docker:docker /home/docker/.ssh/authorized_keys]
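
The key pair written to machines/ha-422561/id_rsa can be sketched with the standard library plus golang.org/x/crypto/ssh. This is an illustrative sketch, not minikube's kic code:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"fmt"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// RSA keypair comparable to machines/<name>/id_rsa.
		priv, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(priv),
		})
		fmt.Printf("id_rsa: %d bytes\n", len(privPEM))
		pub, err := ssh.NewPublicKey(&priv.PublicKey)
		if err != nil {
			panic(err)
		}
		// One authorized_keys line, roughly what is copied into
		// /home/docker/.ssh/authorized_keys (the log reports 381 bytes for the .pub asset).
		fmt.Printf("%s", ssh.MarshalAuthorizedKey(pub))
	}
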
	I1003 18:31:01.189023   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.207395   64909 machine.go:93] provisionDockerMachine start ...
	I1003 18:31:01.207497   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.226029   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.226282   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.226299   64909 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:31:01.372245   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.372275   64909 ubuntu.go:182] provisioning hostname "ha-422561"
	I1003 18:31:01.372335   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.390674   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.390889   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.390902   64909 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-422561 && echo "ha-422561" | sudo tee /etc/hostname
	I1003 18:31:01.544850   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.544932   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.563695   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.563966   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.564014   64909 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422561/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:31:01.708942   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
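
All of the provisioning commands above (the hostname probe, the /etc/hostname write, the /etc/hosts edit) run over SSH to the forwarded container port. A minimal equivalent session with golang.org/x/crypto/ssh, reusing the port, user, and key path reported in the log:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a localhost-forwarded container port
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:32783", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.Output("hostname") // same first probe as the log
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out) // expect "ha-422561"
	}
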
	I1003 18:31:01.708971   64909 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:31:01.709036   64909 ubuntu.go:190] setting up certificates
	I1003 18:31:01.709048   64909 provision.go:84] configureAuth start
	I1003 18:31:01.709101   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:01.727778   64909 provision.go:143] copyHostCerts
	I1003 18:31:01.727814   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727849   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:31:01.727858   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727940   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:31:01.728054   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728079   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:31:01.728090   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728137   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:31:01.728200   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728225   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:31:01.728234   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728266   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:31:01.728336   64909 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.ha-422561 san=[127.0.0.1 192.168.49.2 ha-422561 localhost minikube]
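
configureAuth then issues a server certificate whose SANs cover exactly the list in the log line above. A self-signed sketch with crypto/x509 (minikube actually signs this certificate with its ca.pem/ca-key.pem; it is self-signed here only to keep the sketch short):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-422561"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			// SANs matching the log: san=[127.0.0.1 192.168.49.2 ha-422561 localhost minikube]
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			DNSNames:    []string{"ha-422561", "localhost", "minikube"},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Self-signed: the template doubles as its own parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
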
	I1003 18:31:01.864219   64909 provision.go:177] copyRemoteCerts
	I1003 18:31:01.864281   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:31:01.864317   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.882069   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:01.982800   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:31:01.982877   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 18:31:02.000887   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:31:02.000952   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 18:31:02.017591   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:31:02.017639   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:31:02.034172   64909 provision.go:87] duration metric: took 325.10989ms to configureAuth
	I1003 18:31:02.034202   64909 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:31:02.034393   64909 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:31:02.034508   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.052111   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:02.052326   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:02.052344   64909 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:31:02.295594   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:31:02.295629   64909 machine.go:96] duration metric: took 1.088207423s to provisionDockerMachine
	I1003 18:31:02.295640   64909 client.go:171] duration metric: took 6.765697238s to LocalClient.Create
	I1003 18:31:02.295660   64909 start.go:167] duration metric: took 6.765761646s to libmachine.API.Create "ha-422561"
	I1003 18:31:02.295669   64909 start.go:293] postStartSetup for "ha-422561" (driver="docker")
	I1003 18:31:02.295682   64909 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:31:02.295752   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:31:02.295789   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.312783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.414720   64909 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:31:02.418127   64909 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:31:02.418149   64909 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:31:02.418159   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:31:02.418213   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:31:02.418310   64909 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:31:02.418326   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:31:02.418453   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:31:02.425623   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:02.444405   64909 start.go:296] duration metric: took 148.722871ms for postStartSetup
	I1003 18:31:02.444748   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.462226   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:31:02.462456   64909 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:31:02.462495   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.478737   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.575846   64909 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:31:02.580138   64909 start.go:128] duration metric: took 7.052376255s to createHost
	I1003 18:31:02.580160   64909 start.go:83] releasing machines lock for "ha-422561", held for 7.052515614s
	I1003 18:31:02.580230   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.596730   64909 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:31:02.596776   64909 ssh_runner.go:195] Run: cat /version.json
	I1003 18:31:02.596798   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.596817   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.613783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.614183   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.764865   64909 ssh_runner.go:195] Run: systemctl --version
	I1003 18:31:02.771251   64909 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:31:02.803643   64909 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:31:02.807949   64909 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:31:02.808044   64909 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:31:02.833024   64909 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 18:31:02.833043   64909 start.go:495] detecting cgroup driver to use...
	I1003 18:31:02.833073   64909 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:31:02.833108   64909 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:31:02.847613   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:31:02.858865   64909 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:31:02.858910   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:31:02.874470   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:31:02.890554   64909 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:31:02.970342   64909 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:31:03.055310   64909 docker.go:234] disabling docker service ...
	I1003 18:31:03.055369   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:31:03.072668   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:31:03.084308   64909 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:31:03.163959   64909 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:31:03.241930   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:31:03.253863   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:31:03.266905   64909 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:31:03.266971   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.276795   64909 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:31:03.276848   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.285157   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.293117   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.301070   64909 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:31:03.308489   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.316789   64909 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.329424   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.337651   64909 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:31:03.344839   64909 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
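
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (a sketch: the TOML section headers are assumed from a stock CRI-O drop-in, and only the touched keys are shown):

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
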
	I1003 18:31:03.352026   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.430894   64909 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 18:31:03.533915   64909 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:31:03.534002   64909 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:31:03.537783   64909 start.go:563] Will wait 60s for crictl version
	I1003 18:31:03.537838   64909 ssh_runner.go:195] Run: which crictl
	I1003 18:31:03.541393   64909 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:31:03.564883   64909 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:31:03.564963   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.591363   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.619425   64909 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:31:03.620466   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:31:03.637151   64909 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:31:03.641184   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:31:03.651292   64909 kubeadm.go:883] updating cluster {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:31:03.651379   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:31:03.651428   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.680883   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.680904   64909 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:31:03.680955   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.706829   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.706859   64909 cache_images.go:85] Images are preloaded, skipping loading
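
The preload check decodes `sudo crictl images --output json` and compares the reported repoTags against the expected image list for v1.34.1. A sketch of the decode step; the struct mirrors crictl's JSON output shape:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		// Assumes passwordless sudo, as on this CI host.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var resp struct {
			Images []struct {
				RepoTags []string `json:"repoTags"`
			} `json:"images"`
		}
		if err := json.Unmarshal(out, &resp); err != nil {
			panic(err)
		}
		for _, img := range resp.Images {
			for _, tag := range img.RepoTags {
				fmt.Println(tag) // e.g. registry.k8s.io/kube-apiserver:v1.34.1
			}
		}
	}
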
	I1003 18:31:03.706866   64909 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 18:31:03.706953   64909 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 18:31:03.707032   64909 ssh_runner.go:195] Run: crio config
	I1003 18:31:03.751501   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:31:03.751523   64909 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:31:03.751538   64909 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:31:03.751558   64909 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422561 NodeName:ha-422561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:31:03.751669   64909 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422561"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 18:31:03.751691   64909 kube-vip.go:115] generating kube-vip config ...
	I1003 18:31:03.751728   64909 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1003 18:31:03.763009   64909 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:31:03.763125   64909 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1003 18:31:03.763181   64909 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:31:03.770585   64909 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:31:03.770633   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 18:31:03.778069   64909 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1003 18:31:03.790397   64909 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:31:03.805112   64909 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1003 18:31:03.817362   64909 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1003 18:31:03.830824   64909 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1003 18:31:03.834300   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:31:03.843861   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.921407   64909 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:31:03.944431   64909 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561 for IP: 192.168.49.2
	I1003 18:31:03.944451   64909 certs.go:195] generating shared ca certs ...
	I1003 18:31:03.944468   64909 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:03.944607   64909 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:31:03.944644   64909 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:31:03.944652   64909 certs.go:257] generating profile certs ...
	I1003 18:31:03.944708   64909 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key
	I1003 18:31:03.944722   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt with IP's: []
	I1003 18:31:04.171087   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt ...
	I1003 18:31:04.171118   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt: {Name:mked6cb0f731cbb630d2b187c4975015a458a284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171291   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key ...
	I1003 18:31:04.171301   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key: {Name:mk0c9f0a0941d99f2af213cd316467f053532c99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171391   64909 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905
	I1003 18:31:04.171406   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1003 18:31:04.383185   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 ...
	I1003 18:31:04.383218   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905: {Name:mkc24c55d4abb428b3559a93e6e301be2cab703a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383381   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 ...
	I1003 18:31:04.383394   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905: {Name:mk0576a73623089a3eecf4e34bbbd214545e2247 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383486   64909 certs.go:382] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt
	I1003 18:31:04.383601   64909 certs.go:386] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key
	I1003 18:31:04.383674   64909 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key
	I1003 18:31:04.383689   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt with IP's: []
	I1003 18:31:04.628083   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt ...
	I1003 18:31:04.628112   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt: {Name:mkc19179c67a2559968759165df93d304eb42db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.628269   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key ...
	I1003 18:31:04.628279   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key: {Name:mka8b2392a3d721a70329b852837f3403643f948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.628347   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:31:04.628364   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:31:04.628375   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:31:04.628384   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:31:04.628397   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:31:04.628410   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:31:04.628430   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:31:04.628442   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:31:04.628492   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:31:04.628525   64909 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:31:04.628535   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:31:04.628558   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:31:04.628580   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:31:04.628601   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:31:04.628637   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:04.628666   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.628680   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.628692   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.629254   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:31:04.646879   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:31:04.663465   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:31:04.679837   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:31:04.695959   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 18:31:04.712689   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:31:04.729310   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:31:04.745587   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:31:04.761663   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:31:04.779546   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:31:04.796119   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:31:04.813748   64909 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
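The scp lines above push each host-side asset from the NewFileAsset mapping to its node-side path; the final "scp memory" writes the generated kubeconfig straight from an in-memory buffer. A minimal local stand-in for that copy loop, using the plain filesystem instead of ssh_runner (copyAsset and the paths are illustrative, not minikube's API):

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    	"path/filepath"
    )

    // copyAsset mirrors one scp line: copy src to dst, creating parent dirs.
    func copyAsset(src, dst string) error {
    	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
    		return err
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	out, err := os.Create(dst)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, in)
    	return err
    }

    func main() {
    	// Hypothetical paths standing in for the host -> node copies above.
    	fmt.Println(copyAsset("/tmp/ca.crt", "/tmp/var/lib/minikube/certs/ca.crt"))
    }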
	I1003 18:31:04.826629   64909 ssh_runner.go:195] Run: openssl version
	I1003 18:31:04.832848   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:31:04.840960   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844465   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844506   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.878276   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:31:04.886714   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:31:04.894672   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898099   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898154   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.931606   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:31:04.940357   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:31:04.948454   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952097   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952148   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.985741   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
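The openssl x509 -hash -noout calls above print each certificate's subject-name hash; OpenSSL resolves CA files as /etc/ssl/certs/<hash>.0, so each staged PEM gets a symlink under that name (b5213941.0 for minikubeCA.pem, for example). A small Go sketch of the same hash-then-link step (hashLink is a hypothetical helper; assumes openssl is on PATH):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // hashLink mimics the `openssl x509 -hash` + `ln -fs` pair from the log:
    // it computes the subject hash of pemPath and links certsDir/<hash>.0 to it.
    func hashLink(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	link := fmt.Sprintf("%s/%s.0", certsDir, strings.TrimSpace(string(out)))
    	os.Remove(link) // -f semantics: replace any stale link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	// Illustrative paths matching the log's layout.
    	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }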
	I1003 18:31:04.994005   64909 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:31:04.997322   64909 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
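The stat failure above is expected rather than fatal: a missing apiserver-kubelet-client.crt is taken as evidence of a first start. A sketch of that existence check (isFirstStart is a hypothetical name; local filesystem instead of ssh_runner):

    package main

    import (
    	"errors"
    	"fmt"
    	"io/fs"
    	"os"
    )

    // isFirstStart reports whether the kubelet client cert is absent, which the
    // log above interprets as "likely first start" rather than an error.
    func isFirstStart(certPath string) (bool, error) {
    	_, err := os.Stat(certPath)
    	if errors.Is(err, fs.ErrNotExist) {
    		return true, nil
    	}
    	return false, err // err is nil when the cert exists
    }

    func main() {
    	first, err := isFirstStart("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	fmt.Println(first, err)
    }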
	I1003 18:31:04.997379   64909 kubeadm.go:400] StartCluster: {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:31:04.997476   64909 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:31:04.997539   64909 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:31:05.022530   64909 cri.go:89] found id: ""
	I1003 18:31:05.022595   64909 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:31:05.030329   64909 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:31:05.037782   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:31:05.037841   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:31:05.045127   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:31:05.045142   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:31:05.045174   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:31:05.052235   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:31:05.052286   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:31:05.059062   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:31:05.066034   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:31:05.066081   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:31:05.072912   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.079906   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:31:05.079966   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.086575   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:31:05.093500   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:31:05.093559   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
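The grep/rm pairs above implement one rule: any kubeconfig under /etc/kubernetes that does not mention https://control-plane.minikube.internal:8443 is treated as stale and deleted before kubeadm init runs. A local sketch of that cleanup loop, assuming direct file access rather than ssh_runner:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
    	for _, conf := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(conf)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing file or wrong endpoint: delete it, mirroring `rm -f`.
    			os.Remove(conf)
    			fmt.Println("removed stale", conf)
    		}
    	}
    }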
	I1003 18:31:05.100246   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:31:05.136174   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:31:05.136254   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:31:05.156320   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:31:05.156407   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:31:05.156462   64909 kubeadm.go:318] OS: Linux
	I1003 18:31:05.156539   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:31:05.156610   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:31:05.156705   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:31:05.156790   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:31:05.156865   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:31:05.156939   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:31:05.157035   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:31:05.157127   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:31:05.210250   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:31:05.210408   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:31:05.210566   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:31:05.217643   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:31:05.219725   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:31:05.219828   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:31:05.219943   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:31:05.398135   64909 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 18:31:05.511875   64909 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 18:31:05.863575   64909 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 18:31:06.044823   64909 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 18:31:06.083505   64909 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 18:31:06.083616   64909 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.181464   64909 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 18:31:06.181591   64909 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.345813   64909 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 18:31:06.565989   64909 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 18:31:06.759809   64909 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 18:31:06.759892   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:31:06.883072   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:31:07.211268   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:31:07.403076   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:31:07.687412   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:31:08.052476   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:31:08.052957   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:31:08.054984   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:31:08.056889   64909 out.go:252]   - Booting up control plane ...
	I1003 18:31:08.056984   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:31:08.057047   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:31:08.057102   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:31:08.069846   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:31:08.069954   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:31:08.077490   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:31:08.077826   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:31:08.077870   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:31:08.170750   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:31:08.170893   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:31:09.172507   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001794723s
	I1003 18:31:09.175233   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:31:09.175335   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:31:09.175418   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:31:09.175496   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:35:09.177158   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	I1003 18:35:09.177466   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	I1003 18:35:09.177673   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	I1003 18:35:09.177731   64909 kubeadm.go:318] 
	I1003 18:35:09.177887   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:35:09.178114   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:35:09.178320   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:35:09.178580   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:35:09.178818   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:35:09.179017   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:35:09.179033   64909 kubeadm.go:318] 
	I1003 18:35:09.182028   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:09.182304   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:35:09.182918   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1003 18:35:09.183015   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
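The wait-control-plane phase polls three endpoints for up to 4m0s: the apiserver /livez on the node IP, plus the controller-manager /healthz and scheduler /livez on localhost; here none of the three ever answers, so init aborts. A minimal poller in the same spirit (waitHealthy is illustrative; InsecureSkipVerify stands in for the cluster's self-signed serving certs):

    package main

    import (
    	"context"
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthy polls url once per second until it returns 200 OK
    // or the context deadline expires.
    func waitHealthy(ctx context.Context, url string) error {
    	client := &http.Client{
    		Timeout:   10 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("%s not healthy: %w", url, ctx.Err())
    		case <-time.After(time.Second):
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	// Scheduler endpoint, as in the log; the other two checks look the same.
    	fmt.Println(waitHealthy(ctx, "https://127.0.0.1:10259/livez"))
    }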
	W1003 18:35:09.183174   64909 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001794723s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
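After this first timeout, minikube runs kubeadm reset --force and retries the identical init once (visible below at 18:35:11). A sketch of that reset-and-retry shape, with the flag list abbreviated, sudo assumed, and run as a hypothetical helper:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(args ...string) error {
    	out, err := exec.Command("sudo", args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%v: %w\n%s", args, err, out)
    	}
    	return nil
    }

    func main() {
    	initArgs := []string{"kubeadm", "init", "--config", "/var/tmp/minikube/kubeadm.yaml"}
    	if err := run(initArgs...); err != nil {
    		fmt.Println("init failed, resetting and retrying:", err)
    		_ = run("kubeadm", "reset", "--cri-socket", "/var/run/crio/crio.sock", "--force")
    		if err := run(initArgs...); err != nil {
    			fmt.Println("second init failed:", err)
    		}
    	}
    }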
	
	I1003 18:35:09.183243   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 18:35:11.953646   64909 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.770379999s)
	I1003 18:35:11.953721   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:35:11.965876   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:35:11.965928   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:35:11.973363   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:35:11.973382   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:35:11.973419   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:35:11.980752   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:35:11.980806   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:35:11.987857   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:35:11.995081   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:35:11.995127   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:35:12.001778   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.009063   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:35:12.009126   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.015927   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:35:12.022875   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:35:12.022943   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:35:12.029549   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:35:12.082477   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:12.138594   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:39:14.312592   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1003 18:39:14.312818   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 18:39:14.315914   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:39:14.315992   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:39:14.316115   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:39:14.316166   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:39:14.316250   64909 kubeadm.go:318] OS: Linux
	I1003 18:39:14.316328   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:39:14.316401   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:39:14.316475   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:39:14.316553   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:39:14.316624   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:39:14.316701   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:39:14.316751   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:39:14.316825   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:39:14.316936   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:39:14.317123   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:39:14.317262   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:39:14.317314   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:39:14.319872   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:39:14.319940   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:39:14.320033   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:39:14.320122   64909 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 18:39:14.320186   64909 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 18:39:14.320253   64909 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 18:39:14.320299   64909 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 18:39:14.320350   64909 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 18:39:14.320420   64909 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 18:39:14.320509   64909 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 18:39:14.320604   64909 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 18:39:14.320671   64909 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 18:39:14.320751   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:39:14.320828   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:39:14.320904   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:39:14.321006   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:39:14.321096   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:39:14.321174   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:39:14.321279   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:39:14.321373   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:39:14.322793   64909 out.go:252]   - Booting up control plane ...
	I1003 18:39:14.322884   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:39:14.323004   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:39:14.323072   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:39:14.323162   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:39:14.323237   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:39:14.323335   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:39:14.323415   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:39:14.323456   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:39:14.323557   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:39:14.323652   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:39:14.323702   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001540709s
	I1003 18:39:14.323792   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:39:14.323860   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:39:14.323946   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:39:14.324043   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:39:14.324124   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	I1003 18:39:14.324186   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	I1003 18:39:14.324248   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	I1003 18:39:14.324258   64909 kubeadm.go:318] 
	I1003 18:39:14.324352   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:39:14.324439   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:39:14.324519   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:39:14.324595   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:39:14.324687   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:39:14.324773   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:39:14.324799   64909 kubeadm.go:318] 
	I1003 18:39:14.324836   64909 kubeadm.go:402] duration metric: took 8m9.327461574s to StartCluster
	I1003 18:39:14.324877   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:39:14.324935   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:39:14.352551   64909 cri.go:89] found id: ""
	I1003 18:39:14.352594   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.352608   64909 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:39:14.352617   64909 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:39:14.352684   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:39:14.376604   64909 cri.go:89] found id: ""
	I1003 18:39:14.376629   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.376638   64909 logs.go:284] No container was found matching "etcd"
	I1003 18:39:14.376643   64909 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:39:14.376750   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:39:14.401480   64909 cri.go:89] found id: ""
	I1003 18:39:14.401504   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.401512   64909 logs.go:284] No container was found matching "coredns"
	I1003 18:39:14.401517   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:39:14.401582   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:39:14.426822   64909 cri.go:89] found id: ""
	I1003 18:39:14.426858   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.426871   64909 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:39:14.426879   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:39:14.426946   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:39:14.451679   64909 cri.go:89] found id: ""
	I1003 18:39:14.451710   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.451722   64909 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:39:14.451730   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:39:14.451787   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:39:14.477253   64909 cri.go:89] found id: ""
	I1003 18:39:14.477275   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.477282   64909 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:39:14.477288   64909 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:39:14.477332   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:39:14.501586   64909 cri.go:89] found id: ""
	I1003 18:39:14.501613   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.501621   64909 logs.go:284] No container was found matching "kindnet"
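For diagnostics, each expected component is enumerated with crictl ps -a --quiet --name=<component>; every query above returns an empty ID list, confirming the runtime never started any control-plane container. A sketch of that inventory loop, assuming crictl is installed:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    	}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		ids := strings.Fields(string(out)) // one container ID per line
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", name)
    			continue
    		}
    		fmt.Println(name, "->", ids)
    	}
    }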
	I1003 18:39:14.501632   64909 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:39:14.501643   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:39:14.561285   64909 logs.go:123] Gathering logs for container status ...
	I1003 18:39:14.561318   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:39:14.589589   64909 logs.go:123] Gathering logs for kubelet ...
	I1003 18:39:14.589614   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:39:14.656775   64909 logs.go:123] Gathering logs for dmesg ...
	I1003 18:39:14.656809   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:39:14.668000   64909 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:39:14.668023   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:39:14.725446   64909 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
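The describe-nodes failure is a downstream symptom: with no apiserver container, nothing listens on 8443, so kubectl's dial is refused immediately rather than timing out. A quick probe that distinguishes a closed port from a filtered one, with the address taken from the log:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 3*time.Second)
    	if err != nil {
    		// "connection refused" => port closed (no apiserver running);
    		// a timeout would instead suggest filtering or routing problems.
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("port 8443 is accepting connections")
    }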
	W1003 18:39:14.725478   64909 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	W1003 18:39:14.725530   64909 out.go:285] * 
	W1003 18:39:14.725612   64909 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:39:14.725629   64909 out.go:285] * 
	W1003 18:39:14.727399   64909 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:39:14.731087   64909 out.go:203] 
	W1003 18:39:14.732560   64909 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:39:14.732585   64909 out.go:285] * 
	I1003 18:39:14.734183   64909 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-422561 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
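The kubeadm output above already names the triage path: list CRI-O's containers, then read the failing component's logs. A minimal shell sketch that consolidates those steps, assuming the ha-422561 node container is still running and reachable via `minikube ssh`; the journalctl step is an added assumption, not something this run executed:

    # List every Kubernetes container CRI-O knows about, including exited ones.
    out/minikube-linux-amd64 -p ha-422561 ssh -- \
      "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

    # Read the logs of whichever control-plane container exited (CONTAINERID from the listing).
    out/minikube-linux-amd64 -p ha-422561 ssh -- \
      "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID"

    # Added assumption: the kubelet journal usually records why static pods never started.
    out/minikube-linux-amd64 -p ha-422561 ssh -- "sudo journalctl -u kubelet --no-pager | tail -n 100"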
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-422561
helpers_test.go:243: (dbg) docker inspect ha-422561:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	        "Created": "2025-10-03T18:31:00.396132938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 65481,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T18:31:00.428325646Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hostname",
	        "HostsPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hosts",
	        "LogPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512-json.log",
	        "Name": "/ha-422561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-422561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-422561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	                "LowerDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-422561",
	                "Source": "/var/lib/docker/volumes/ha-422561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422561",
	                "name.minikube.sigs.k8s.io": "ha-422561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3084976d568ce061948ebe671f279a80502b1d28417f2be7c2497961eac2a5aa",
	            "SandboxKey": "/var/run/docker/netns/3084976d568c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:e4:3c:eb:d3:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de6aa7ca29f453c0d15cb280abde7ee215f554c89e78e3db8a0f7590468114b5",
	                    "EndpointID": "1b961733d045b77a64efb8afa6caa273125f56ec888f823b790f5454f23ca3b7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422561",
	                        "eef8fc426b2b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
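The inspect dump shows the Docker side is healthy: the container is running, holds 192.168.49.2 on the ha-422561 network, and publishes 8443/tcp on 127.0.0.1:32786, so the failure sits inside the node rather than in the container plumbing. To pull just those fields instead of scanning the full JSON, standard `docker inspect` Go templates work; a sketch, not part of the recorded run:

    # Container state, node IP on the cluster network, and the published API-server port.
    docker inspect -f '{{.State.Status}}' ha-422561
    docker inspect -f '{{(index .NetworkSettings.Networks "ha-422561").IPAddress}}' ha-422561
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-422561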
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561: exit status 6 (297.526594ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:39:15.082474   70083 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
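The status check fails for a reason separate from the cluster itself: the kubeconfig at /home/jenkins/minikube-integration/21625-8669/kubeconfig has no ha-422561 entry, which follows from the start never completing. A sketch of the follow-up one might run; the kubectl step is an assumption, and whether `minikube update-context` can repair a missing (rather than merely stale) entry is not shown by this run:

    # See which contexts the kubeconfig actually holds.
    kubectl config get-contexts
    # Suggested by the status output above for a stale context; here the entry is absent entirely.
    out/minikube-linux-amd64 -p ha-422561 update-context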
helpers_test.go:252: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/StartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ dashboard      │ --url --port 36195 -p functional-889240 --alsologtostderr -v=1                                                     │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh            │ functional-889240 ssh -- ls -la /mount-9p                                                                          │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh            │ functional-889240 ssh sudo umount -f /mount-9p                                                                     │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ mount          │ -p functional-889240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1298239377/001:/mount1 --alsologtostderr -v=1 │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ mount          │ -p functional-889240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1298239377/001:/mount2 --alsologtostderr -v=1 │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh            │ functional-889240 ssh findmnt -T /mount1                                                                           │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ mount          │ -p functional-889240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1298239377/001:/mount3 --alsologtostderr -v=1 │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ ssh            │ functional-889240 ssh findmnt -T /mount1                                                                           │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh            │ functional-889240 ssh findmnt -T /mount2                                                                           │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh            │ functional-889240 ssh findmnt -T /mount3                                                                           │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ mount          │ -p functional-889240 --kill=true                                                                                   │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ addons         │ functional-889240 addons list                                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ addons         │ functional-889240 addons list -o json                                                                              │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ update-context │ functional-889240 update-context --alsologtostderr -v=2                                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ update-context │ functional-889240 update-context --alsologtostderr -v=2                                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ update-context │ functional-889240 update-context --alsologtostderr -v=2                                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image          │ functional-889240 image ls --format short --alsologtostderr                                                        │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image          │ functional-889240 image ls --format yaml --alsologtostderr                                                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh            │ functional-889240 ssh pgrep buildkitd                                                                              │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ image          │ functional-889240 image ls --format json --alsologtostderr                                                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image          │ functional-889240 image ls --format table --alsologtostderr                                                        │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image          │ functional-889240 image build -t localhost/my-image:functional-889240 testdata/build --alsologtostderr             │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:27 UTC │
	│ image          │ functional-889240 image ls                                                                                         │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:27 UTC │ 03 Oct 25 18:27 UTC │
	│ delete         │ -p functional-889240                                                                                               │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │ 03 Oct 25 18:30 UTC │
	│ start          │ ha-422561 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio    │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:30:55
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:30:55.351405   64909 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:30:55.351662   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351671   64909 out.go:374] Setting ErrFile to fd 2...
	I1003 18:30:55.351675   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351854   64909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:30:55.352339   64909 out.go:368] Setting JSON to false
	I1003 18:30:55.353203   64909 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4406,"bootTime":1759511849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:30:55.353289   64909 start.go:140] virtualization: kvm guest
	I1003 18:30:55.355458   64909 out.go:179] * [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:30:55.356815   64909 notify.go:220] Checking for updates...
	I1003 18:30:55.356884   64909 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:30:55.358389   64909 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:30:55.359964   64909 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:30:55.361351   64909 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:30:55.362647   64909 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:30:55.363956   64909 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:30:55.365351   64909 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:30:55.387768   64909 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:30:55.387885   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.443407   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.433728571 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.443516   64909 docker.go:318] overlay module found
	I1003 18:30:55.445440   64909 out.go:179] * Using the docker driver based on user configuration
	I1003 18:30:55.446777   64909 start.go:304] selected driver: docker
	I1003 18:30:55.446793   64909 start.go:924] validating driver "docker" against <nil>
	I1003 18:30:55.446808   64909 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:30:55.447403   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.498777   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.489521827 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.498958   64909 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 18:30:55.499206   64909 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:30:55.501187   64909 out.go:179] * Using Docker driver with root privileges
	I1003 18:30:55.502312   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:30:55.502386   64909 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 18:30:55.502397   64909 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 18:30:55.502459   64909 start.go:348] cluster config:
	{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:30:55.503779   64909 out.go:179] * Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	I1003 18:30:55.504816   64909 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:30:55.506028   64909 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:30:55.507131   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:55.507167   64909 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:30:55.507169   64909 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:30:55.507175   64909 cache.go:58] Caching tarball of preloaded images
	I1003 18:30:55.507294   64909 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:30:55.507311   64909 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:30:55.507736   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:30:55.507764   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json: {Name:mk1ece959bac74a473416f0dfc8af04a6136d7b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:30:55.527458   64909 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:30:55.527478   64909 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:30:55.527494   64909 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:30:55.527527   64909 start.go:360] acquireMachinesLock for ha-422561: {Name:mk32fd04a5d9b5f89831583bab7d7527f4d187a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:30:55.527631   64909 start.go:364] duration metric: took 81.336µs to acquireMachinesLock for "ha-422561"
	I1003 18:30:55.527657   64909 start.go:93] Provisioning new machine with config: &{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:30:55.527748   64909 start.go:125] createHost starting for "" (driver="docker")
	I1003 18:30:55.529663   64909 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1003 18:30:55.529898   64909 start.go:159] libmachine.API.Create for "ha-422561" (driver="docker")
	I1003 18:30:55.529933   64909 client.go:168] LocalClient.Create starting
	I1003 18:30:55.530028   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem
	I1003 18:30:55.530072   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530097   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530187   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem
	I1003 18:30:55.530226   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530238   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530612   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 18:30:55.547068   64909 cli_runner.go:211] docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 18:30:55.547129   64909 network_create.go:284] running [docker network inspect ha-422561] to gather additional debugging logs...
	I1003 18:30:55.547146   64909 cli_runner.go:164] Run: docker network inspect ha-422561
	W1003 18:30:55.563141   64909 cli_runner.go:211] docker network inspect ha-422561 returned with exit code 1
	I1003 18:30:55.563167   64909 network_create.go:287] error running [docker network inspect ha-422561]: docker network inspect ha-422561: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-422561 not found
	I1003 18:30:55.563179   64909 network_create.go:289] output of [docker network inspect ha-422561]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-422561 not found
	
	** /stderr **
	I1003 18:30:55.563276   64909 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:30:55.579301   64909 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00157b3a0}
	I1003 18:30:55.579336   64909 network_create.go:124] attempt to create docker network ha-422561 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1003 18:30:55.579388   64909 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-422561 ha-422561
	I1003 18:30:55.634233   64909 network_create.go:108] docker network ha-422561 192.168.49.0/24 created
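	A minimal sketch, assuming only the host's docker CLI, of confirming the subnet and gateway minikube picked for this network (the --format template is illustrative, not minikube's own):
	    docker network inspect ha-422561 \
	      --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
	    # expected for this run: subnet=192.168.49.0/24 gateway=192.168.49.1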
	I1003 18:30:55.634260   64909 kic.go:121] calculated static IP "192.168.49.2" for the "ha-422561" container
	I1003 18:30:55.634318   64909 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 18:30:55.649960   64909 cli_runner.go:164] Run: docker volume create ha-422561 --label name.minikube.sigs.k8s.io=ha-422561 --label created_by.minikube.sigs.k8s.io=true
	I1003 18:30:55.667186   64909 oci.go:103] Successfully created a docker volume ha-422561
	I1003 18:30:55.667250   64909 cli_runner.go:164] Run: docker run --rm --name ha-422561-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --entrypoint /usr/bin/test -v ha-422561:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 18:30:56.041615   64909 oci.go:107] Successfully prepared a docker volume ha-422561
	I1003 18:30:56.041648   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:56.041669   64909 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 18:30:56.041727   64909 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 18:31:00.326417   64909 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.284654466s)
	I1003 18:31:00.326457   64909 kic.go:203] duration metric: took 4.284784967s to extract preloaded images to volume ...
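	The extraction above is a one-shot container: the preload tarball is mounted read-only, the machine's volume is mounted at /extractDir, and tar unpacks across the two. A hedged re-run by hand; the path and image digest are taken from this run's log:
	    PRELOAD_TARBALL=/home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	    docker run --rm --entrypoint /usr/bin/tar \
	      -v "$PRELOAD_TARBALL":/preloaded.tar:ro \
	      -v ha-422561:/extractDir \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d \
	      -I lz4 -xf /preloaded.tar -C /extractDir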
	W1003 18:31:00.326567   64909 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1003 18:31:00.326610   64909 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1003 18:31:00.326657   64909 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 18:31:00.381592   64909 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-422561 --name ha-422561 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-422561 --network ha-422561 --ip 192.168.49.2 --volume ha-422561:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1003 18:31:00.641348   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Running}}
	I1003 18:31:00.659876   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:00.678319   64909 cli_runner.go:164] Run: docker exec ha-422561 stat /var/lib/dpkg/alternatives/iptables
	I1003 18:31:00.728414   64909 oci.go:144] the created container "ha-422561" has a running status.
	I1003 18:31:00.728450   64909 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa...
	I1003 18:31:01.103610   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1003 18:31:01.103663   64909 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 18:31:01.128670   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.147200   64909 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 18:31:01.147218   64909 kic_runner.go:114] Args: [docker exec --privileged ha-422561 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 18:31:01.189023   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.207395   64909 machine.go:93] provisionDockerMachine start ...
	I1003 18:31:01.207497   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.226029   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.226282   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.226299   64909 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:31:01.372245   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.372275   64909 ubuntu.go:182] provisioning hostname "ha-422561"
	I1003 18:31:01.372335   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.390674   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.390889   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.390902   64909 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-422561 && echo "ha-422561" | sudo tee /etc/hostname
	I1003 18:31:01.544850   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.544932   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.563695   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.563966   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.564014   64909 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422561/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:31:01.708942   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
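	A quick spot-check of what those two SSH commands should leave behind, via docker exec rather than SSH (a sketch, not part of the test):
	    docker exec ha-422561 cat /etc/hostname           # ha-422561
	    docker exec ha-422561 grep 127.0.1.1 /etc/hosts   # 127.0.1.1 ha-422561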
	I1003 18:31:01.708971   64909 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:31:01.709036   64909 ubuntu.go:190] setting up certificates
	I1003 18:31:01.709048   64909 provision.go:84] configureAuth start
	I1003 18:31:01.709101   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:01.727778   64909 provision.go:143] copyHostCerts
	I1003 18:31:01.727814   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727849   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:31:01.727858   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727940   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:31:01.728054   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728079   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:31:01.728090   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728137   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:31:01.728200   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728225   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:31:01.728234   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728266   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:31:01.728336   64909 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.ha-422561 san=[127.0.0.1 192.168.49.2 ha-422561 localhost minikube]
	I1003 18:31:01.864219   64909 provision.go:177] copyRemoteCerts
	I1003 18:31:01.864281   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:31:01.864317   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.882069   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:01.982800   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:31:01.982877   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 18:31:02.000887   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:31:02.000952   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 18:31:02.017591   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:31:02.017639   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:31:02.034172   64909 provision.go:87] duration metric: took 325.10989ms to configureAuth
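	Once copyRemoteCerts has placed the server cert, the SANs generated above (127.0.0.1, 192.168.49.2, ha-422561, localhost, minikube) can be inspected in place; a sketch assuming openssl is present inside the kicbase image:
	    docker exec ha-422561 sudo openssl x509 -in /etc/docker/server.pem -noout -text \
	      | grep -A1 'Subject Alternative Name'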
	I1003 18:31:02.034202   64909 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:31:02.034393   64909 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:31:02.034508   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.052111   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:02.052326   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:02.052344   64909 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:31:02.295594   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:31:02.295629   64909 machine.go:96] duration metric: took 1.088207423s to provisionDockerMachine
	I1003 18:31:02.295640   64909 client.go:171] duration metric: took 6.765697238s to LocalClient.Create
	I1003 18:31:02.295660   64909 start.go:167] duration metric: took 6.765761646s to libmachine.API.Create "ha-422561"
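	The CRIO_MINIKUBE_OPTIONS drop-in written during provisioning can be read back directly; a sketch:
	    docker exec ha-422561 cat /etc/sysconfig/crio.minikube
	    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '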
	I1003 18:31:02.295669   64909 start.go:293] postStartSetup for "ha-422561" (driver="docker")
	I1003 18:31:02.295682   64909 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:31:02.295752   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:31:02.295789   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.312783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.414720   64909 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:31:02.418127   64909 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:31:02.418149   64909 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:31:02.418159   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:31:02.418213   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:31:02.418310   64909 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:31:02.418326   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:31:02.418453   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:31:02.425623   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:02.444405   64909 start.go:296] duration metric: took 148.722871ms for postStartSetup
	I1003 18:31:02.444748   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.462226   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:31:02.462456   64909 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:31:02.462495   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.478737   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.575846   64909 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:31:02.580138   64909 start.go:128] duration metric: took 7.052376255s to createHost
	I1003 18:31:02.580160   64909 start.go:83] releasing machines lock for "ha-422561", held for 7.052515614s
	I1003 18:31:02.580230   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.596730   64909 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:31:02.596776   64909 ssh_runner.go:195] Run: cat /version.json
	I1003 18:31:02.596798   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.596817   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.613783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.614183   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.764865   64909 ssh_runner.go:195] Run: systemctl --version
	I1003 18:31:02.771251   64909 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:31:02.803643   64909 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:31:02.807949   64909 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:31:02.808044   64909 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:31:02.833024   64909 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 18:31:02.833043   64909 start.go:495] detecting cgroup driver to use...
	I1003 18:31:02.833073   64909 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:31:02.833108   64909 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:31:02.847613   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:31:02.858865   64909 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:31:02.858910   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:31:02.874470   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:31:02.890554   64909 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:31:02.970342   64909 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:31:03.055310   64909 docker.go:234] disabling docker service ...
	I1003 18:31:03.055369   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:31:03.072668   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:31:03.084308   64909 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:31:03.163959   64909 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:31:03.241930   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:31:03.253863   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:31:03.266905   64909 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:31:03.266971   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.276795   64909 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:31:03.276848   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.285157   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.293117   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.301070   64909 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:31:03.308489   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.316789   64909 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.329424   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
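	Those sed edits should leave a pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl in the drop-in; a sketch for spot-checking the keys (the grep pattern is ours, only the file path comes from the log):
	    docker exec ha-422561 sudo grep -E \
	      'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf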
	I1003 18:31:03.337651   64909 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:31:03.344839   64909 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:31:03.352026   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.430894   64909 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 18:31:03.533915   64909 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:31:03.534002   64909 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:31:03.537783   64909 start.go:563] Will wait 60s for crictl version
	I1003 18:31:03.537838   64909 ssh_runner.go:195] Run: which crictl
	I1003 18:31:03.541393   64909 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:31:03.564883   64909 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
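	With /etc/crictl.yaml pointing at the CRI-O socket, plain crictl calls need no explicit endpoint; a sketch of two common ones, run inside the machine:
	    sudo crictl info    # runtime status over unix:///var/run/crio/crio.sock
	    sudo crictl ps -a   # all containers, any state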
	I1003 18:31:03.564963   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.591363   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.619425   64909 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:31:03.620466   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:31:03.637151   64909 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:31:03.641184   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
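	The /etc/hosts rewrite above is a replace-then-append idiom: strip any stale line for the name, append the fresh mapping, then copy the temp file back into place. Generalized as a hypothetical helper (update_host_entry is ours, not minikube's):
	    update_host_entry() {
	      local ip="$1" name="$2"
	      # drop any existing '<tab>name' line, then append the fresh mapping
	      { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
	      sudo cp "/tmp/h.$$" /etc/hosts
	    }
	    update_host_entry 192.168.49.1 host.minikube.internal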
	I1003 18:31:03.651292   64909 kubeadm.go:883] updating cluster {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:31:03.651379   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:31:03.651428   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.680883   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.680904   64909 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:31:03.680955   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.706829   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.706859   64909 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:31:03.706866   64909 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 18:31:03.706953   64909 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
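	Once the unit and its 10-kubeadm.conf drop-in are copied over (a few lines below), the merged result can be checked with systemd itself; a sketch:
	    docker exec ha-422561 systemctl cat kubelet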
	I1003 18:31:03.707032   64909 ssh_runner.go:195] Run: crio config
	I1003 18:31:03.751501   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:31:03.751523   64909 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:31:03.751538   64909 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:31:03.751558   64909 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422561 NodeName:ha-422561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:31:03.751669   64909 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422561"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
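	Before kubeadm consumes the rendered config, it can be validated offline; a sketch assuming kubeadm's `config validate` subcommand (present in recent releases) and the path the file is copied to later in this log:
	    docker exec ha-422561 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm \
	      config validate --config /var/tmp/minikube/kubeadm.yaml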
	
	I1003 18:31:03.751691   64909 kube-vip.go:115] generating kube-vip config ...
	I1003 18:31:03.751728   64909 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1003 18:31:03.763009   64909 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
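	kube-vip only enables its control-plane load balancing when ipvs is usable, hence the lsmod probe above. A sketch of loading the usual ipvs module set by hand (the module list is an assumption, not from this log):
	    lsmod | grep -q ip_vs || sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh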
	I1003 18:31:03.763125   64909 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
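	That manifest lands in /etc/kubernetes/manifests (see the scp below), so kubelet runs it as a static pod; a sketch for confirming it came up, from inside the machine:
	    sudo crictl ps --name kube-vip
	    # or via the API server, where static pods are suffixed with the node name (assumed):
	    kubectl -n kube-system get pod kube-vip-ha-422561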
	I1003 18:31:03.763181   64909 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:31:03.770585   64909 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:31:03.770633   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 18:31:03.778069   64909 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1003 18:31:03.790397   64909 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:31:03.805112   64909 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1003 18:31:03.817362   64909 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1003 18:31:03.830824   64909 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1003 18:31:03.834300   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:31:03.843861   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.921407   64909 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:31:03.944431   64909 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561 for IP: 192.168.49.2
	I1003 18:31:03.944451   64909 certs.go:195] generating shared ca certs ...
	I1003 18:31:03.944468   64909 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:03.944607   64909 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:31:03.944644   64909 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:31:03.944652   64909 certs.go:257] generating profile certs ...
	I1003 18:31:03.944708   64909 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key
	I1003 18:31:03.944722   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt with IP's: []
	I1003 18:31:04.171087   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt ...
	I1003 18:31:04.171118   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt: {Name:mked6cb0f731cbb630d2b187c4975015a458a284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171291   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key ...
	I1003 18:31:04.171301   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key: {Name:mk0c9f0a0941d99f2af213cd316467f053532c99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171391   64909 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905
	I1003 18:31:04.171406   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1003 18:31:04.383185   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 ...
	I1003 18:31:04.383218   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905: {Name:mkc24c55d4abb428b3559a93e6e301be2cab703a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383381   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 ...
	I1003 18:31:04.383394   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905: {Name:mk0576a73623089a3eecf4e34bbbd214545e2247 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383486   64909 certs.go:382] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt
	I1003 18:31:04.383601   64909 certs.go:386] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key
	I1003 18:31:04.383674   64909 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key
	I1003 18:31:04.383689   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt with IP's: []
	I1003 18:31:04.628083   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt ...
	I1003 18:31:04.628112   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt: {Name:mkc19179c67a2559968759165df93d304eb42db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.628269   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key ...
	I1003 18:31:04.628279   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key: {Name:mka8b2392a3d721a70329b852837f3403643f948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.628347   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:31:04.628364   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:31:04.628375   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:31:04.628384   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:31:04.628397   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:31:04.628410   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:31:04.628430   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:31:04.628442   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:31:04.628492   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:31:04.628525   64909 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:31:04.628535   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:31:04.628558   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:31:04.628580   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:31:04.628601   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:31:04.628637   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:04.628666   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.628680   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.628692   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.629254   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:31:04.646879   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:31:04.663465   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:31:04.679837   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:31:04.695959   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 18:31:04.712689   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:31:04.729310   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:31:04.745587   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:31:04.761663   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:31:04.779546   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:31:04.796119   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:31:04.813748   64909 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:31:04.826629   64909 ssh_runner.go:195] Run: openssl version
	I1003 18:31:04.832848   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:31:04.840960   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844465   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844506   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.878276   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:31:04.886714   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:31:04.894672   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898099   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898154   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.931606   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:31:04.940357   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:31:04.948454   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952097   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952148   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.985741   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
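	The 3ec20f2e.0 / b5213941.0 / 51391683.0 names above are OpenSSL subject hashes plus a .0 suffix, which is how /etc/ssl/certs lookups find a CA; a sketch reproducing the last link:
	    h=$(openssl x509 -hash -noout -in /etc/ssl/certs/12212.pem)   # 51391683 for this cert
	    sudo ln -fs /etc/ssl/certs/12212.pem "/etc/ssl/certs/${h}.0"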
	I1003 18:31:04.994005   64909 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:31:04.997322   64909 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 18:31:04.997379   64909 kubeadm.go:400] StartCluster: {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:31:04.997476   64909 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:31:04.997539   64909 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:31:05.022530   64909 cri.go:89] found id: ""
	I1003 18:31:05.022595   64909 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:31:05.030329   64909 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:31:05.037782   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:31:05.037841   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:31:05.045127   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:31:05.045142   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:31:05.045174   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:31:05.052235   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:31:05.052286   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:31:05.059062   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:31:05.066034   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:31:05.066081   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:31:05.072912   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.079906   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:31:05.079966   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.086575   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:31:05.093500   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:31:05.093559   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
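The four grep/rm pairs above are one pattern repeated per kubeconfig: keep the file only if it already points at the expected control-plane endpoint, otherwise remove it so kubeadm regenerates it. A condensed sketch of the same cleanup:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done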
	I1003 18:31:05.100246   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:31:05.136174   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:31:05.136254   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:31:05.156320   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:31:05.156407   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:31:05.156462   64909 kubeadm.go:318] OS: Linux
	I1003 18:31:05.156539   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:31:05.156610   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:31:05.156705   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:31:05.156790   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:31:05.156865   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:31:05.156939   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:31:05.157035   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:31:05.157127   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:31:05.210250   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:31:05.210408   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:31:05.210566   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:31:05.217643   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:31:05.219725   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:31:05.219828   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:31:05.219943   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:31:05.398135   64909 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 18:31:05.511875   64909 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 18:31:05.863575   64909 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 18:31:06.044823   64909 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 18:31:06.083505   64909 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 18:31:06.083616   64909 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.181464   64909 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 18:31:06.181591   64909 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.345813   64909 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 18:31:06.565989   64909 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 18:31:06.759809   64909 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 18:31:06.759892   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:31:06.883072   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:31:07.211268   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:31:07.403076   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:31:07.687412   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:31:08.052476   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:31:08.052957   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:31:08.054984   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:31:08.056889   64909 out.go:252]   - Booting up control plane ...
	I1003 18:31:08.056984   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:31:08.057047   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:31:08.057102   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:31:08.069846   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:31:08.069954   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:31:08.077490   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:31:08.077826   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:31:08.077870   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:31:08.170750   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:31:08.170893   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:31:09.172507   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001794723s
	I1003 18:31:09.175233   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:31:09.175335   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:31:09.175418   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:31:09.175496   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:35:09.177158   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	I1003 18:35:09.177466   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	I1003 18:35:09.177673   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	I1003 18:35:09.177731   64909 kubeadm.go:318] 
	I1003 18:35:09.177887   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:35:09.178114   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:35:09.178320   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:35:09.178580   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:35:09.178818   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:35:09.179017   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:35:09.179033   64909 kubeadm.go:318] 
	I1003 18:35:09.182028   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:09.182304   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:35:09.182918   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1003 18:35:09.183015   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1003 18:35:09.183174   64909 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001794723s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
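Before the retry below, the failing component could be located directly on the node by following the crictl hint kubeadm prints above; for example (CONTAINERID is a placeholder for an ID taken from the listing):

    # list every kube-* container, including ones that crashed on start
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # then inspect the logs of a specific container from that listing
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID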
	I1003 18:35:09.183243   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 18:35:11.953646   64909 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.770379999s)
	I1003 18:35:11.953721   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:35:11.965876   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:35:11.965928   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:35:11.973363   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:35:11.973382   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:35:11.973419   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:35:11.980752   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:35:11.980806   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:35:11.987857   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:35:11.995081   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:35:11.995127   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:35:12.001778   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.009063   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:35:12.009126   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.015927   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:35:12.022875   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:35:12.022943   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:35:12.029549   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:35:12.082477   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:12.138594   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:39:14.312592   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1003 18:39:14.312818   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 18:39:14.315914   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:39:14.315992   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:39:14.316115   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:39:14.316166   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:39:14.316250   64909 kubeadm.go:318] OS: Linux
	I1003 18:39:14.316328   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:39:14.316401   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:39:14.316475   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:39:14.316553   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:39:14.316624   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:39:14.316701   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:39:14.316751   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:39:14.316825   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:39:14.316936   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:39:14.317123   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:39:14.317262   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:39:14.317314   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:39:14.319872   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:39:14.319940   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:39:14.320033   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:39:14.320122   64909 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 18:39:14.320186   64909 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 18:39:14.320253   64909 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 18:39:14.320299   64909 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 18:39:14.320350   64909 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 18:39:14.320420   64909 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 18:39:14.320509   64909 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 18:39:14.320604   64909 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 18:39:14.320671   64909 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 18:39:14.320751   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:39:14.320828   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:39:14.320904   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:39:14.321006   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:39:14.321096   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:39:14.321174   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:39:14.321279   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:39:14.321373   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:39:14.322793   64909 out.go:252]   - Booting up control plane ...
	I1003 18:39:14.322884   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:39:14.323004   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:39:14.323072   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:39:14.323162   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:39:14.323237   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:39:14.323335   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:39:14.323415   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:39:14.323456   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:39:14.323557   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:39:14.323652   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:39:14.323702   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001540709s
	I1003 18:39:14.323792   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:39:14.323860   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:39:14.323946   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:39:14.324043   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:39:14.324124   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	I1003 18:39:14.324186   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	I1003 18:39:14.324248   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	I1003 18:39:14.324258   64909 kubeadm.go:318] 
	I1003 18:39:14.324352   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:39:14.324439   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:39:14.324519   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:39:14.324595   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:39:14.324687   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:39:14.324773   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:39:14.324799   64909 kubeadm.go:318] 
	I1003 18:39:14.324836   64909 kubeadm.go:402] duration metric: took 8m9.327461574s to StartCluster
	I1003 18:39:14.324877   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:39:14.324935   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:39:14.352551   64909 cri.go:89] found id: ""
	I1003 18:39:14.352594   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.352608   64909 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:39:14.352617   64909 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:39:14.352684   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:39:14.376604   64909 cri.go:89] found id: ""
	I1003 18:39:14.376629   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.376638   64909 logs.go:284] No container was found matching "etcd"
	I1003 18:39:14.376643   64909 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:39:14.376750   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:39:14.401480   64909 cri.go:89] found id: ""
	I1003 18:39:14.401504   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.401512   64909 logs.go:284] No container was found matching "coredns"
	I1003 18:39:14.401517   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:39:14.401582   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:39:14.426822   64909 cri.go:89] found id: ""
	I1003 18:39:14.426858   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.426871   64909 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:39:14.426879   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:39:14.426946   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:39:14.451679   64909 cri.go:89] found id: ""
	I1003 18:39:14.451710   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.451722   64909 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:39:14.451730   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:39:14.451787   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:39:14.477253   64909 cri.go:89] found id: ""
	I1003 18:39:14.477275   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.477282   64909 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:39:14.477288   64909 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:39:14.477332   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:39:14.501586   64909 cri.go:89] found id: ""
	I1003 18:39:14.501613   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.501621   64909 logs.go:284] No container was found matching "kindnet"
	I1003 18:39:14.501632   64909 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:39:14.501643   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:39:14.561285   64909 logs.go:123] Gathering logs for container status ...
	I1003 18:39:14.561318   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:39:14.589589   64909 logs.go:123] Gathering logs for kubelet ...
	I1003 18:39:14.589614   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:39:14.656775   64909 logs.go:123] Gathering logs for dmesg ...
	I1003 18:39:14.656809   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:39:14.668000   64909 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:39:14.668023   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:39:14.725446   64909 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
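The connection-refused errors above mean kubectl never reached an apiserver on this node. The same checks the log performed can be reproduced by hand against the running profile; a sketch, assuming the docker driver and the ha-422561 profile from this run:

    # probe the endpoint kubeadm was polling (-k: the serving cert is signed by the cluster-local CA)
    curl -k https://192.168.49.2:8443/livez
    # pull the runtime and kubelet journals that minikube gathered above
    minikube -p ha-422561 ssh "sudo journalctl -u crio -n 400"
    minikube -p ha-422561 ssh "sudo journalctl -u kubelet -n 400"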
	W1003 18:39:14.725478   64909 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	W1003 18:39:14.725530   64909 out.go:285] * 
	W1003 18:39:14.725612   64909 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:39:14.725629   64909 out.go:285] * 
	W1003 18:39:14.727399   64909 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
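For a multi-profile run like this one, the log bundle requested in the box above should name the profile so the right cluster is captured:

    minikube logs --file=logs.txt -p ha-422561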
	I1003 18:39:14.731087   64909 out.go:203] 
	W1003 18:39:14.732560   64909 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:39:14.732585   64909 out.go:285] * 
	I1003 18:39:14.734183   64909 out.go:203] 
	
	
	==> CRI-O <==
	Oct 03 18:39:06 ha-422561 crio[781]: time="2025-10-03T18:39:06.903867275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:39:06 ha-422561 crio[781]: time="2025-10-03T18:39:06.904314361Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:39:06 ha-422561 crio[781]: time="2025-10-03T18:39:06.905248074Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:39:06 ha-422561 crio[781]: time="2025-10-03T18:39:06.905730167Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:39:06 ha-422561 crio[781]: time="2025-10-03T18:39:06.921828302Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a54f7b97-cc69-4b78-aebc-bbf901ae86f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:39:06 ha-422561 crio[781]: time="2025-10-03T18:39:06.923011689Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=e9b70815-d1a0-487f-8528-b381bee53e83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:39:06 ha-422561 crio[781]: time="2025-10-03T18:39:06.923222244Z" level=info msg="createCtr: deleting container ID 67abeeabda674663dfd11e9954d4340b87d75171c4615d3b4d13c0ac49e25df6 from idIndex" id=a54f7b97-cc69-4b78-aebc-bbf901ae86f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:39:06 ha-422561 crio[781]: time="2025-10-03T18:39:06.923245849Z" level=info msg="createCtr: removing container 67abeeabda674663dfd11e9954d4340b87d75171c4615d3b4d13c0ac49e25df6" id=a54f7b97-cc69-4b78-aebc-bbf901ae86f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:39:06 ha-422561 crio[781]: time="2025-10-03T18:39:06.923272786Z" level=info msg="createCtr: deleting container 67abeeabda674663dfd11e9954d4340b87d75171c4615d3b4d13c0ac49e25df6 from storage" id=a54f7b97-cc69-4b78-aebc-bbf901ae86f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:39:06 ha-422561 crio[781]: time="2025-10-03T18:39:06.924350218Z" level=info msg="createCtr: deleting container ID 971168468f670b6b38405e62af6b7c66e4447abdbab2029f8b7b4563eea55c8c from idIndex" id=e9b70815-d1a0-487f-8528-b381bee53e83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:39:06 ha-422561 crio[781]: time="2025-10-03T18:39:06.924379995Z" level=info msg="createCtr: removing container 971168468f670b6b38405e62af6b7c66e4447abdbab2029f8b7b4563eea55c8c" id=e9b70815-d1a0-487f-8528-b381bee53e83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:39:06 ha-422561 crio[781]: time="2025-10-03T18:39:06.924414425Z" level=info msg="createCtr: deleting container 971168468f670b6b38405e62af6b7c66e4447abdbab2029f8b7b4563eea55c8c from storage" id=e9b70815-d1a0-487f-8528-b381bee53e83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:39:06 ha-422561 crio[781]: time="2025-10-03T18:39:06.926227218Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-422561_kube-system_6803106e6cb30e1b9b282ce29772fddf_0" id=a54f7b97-cc69-4b78-aebc-bbf901ae86f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:39:06 ha-422561 crio[781]: time="2025-10-03T18:39:06.92656918Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-422561_kube-system_2640157afe5e174d7402164688eed7be_0" id=e9b70815-d1a0-487f-8528-b381bee53e83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:39:10 ha-422561 crio[781]: time="2025-10-03T18:39:10.896488369Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=d1d441fb-a8a1-41b7-a8ff-8ff9e35a3084 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:39:10 ha-422561 crio[781]: time="2025-10-03T18:39:10.897280265Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=85b151f8-1a4c-45d5-96ba-7ecb1e35c396 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:39:10 ha-422561 crio[781]: time="2025-10-03T18:39:10.898109927Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-422561/kube-apiserver" id=54a30861-0969-42f9-83a8-892b0c46a0cb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:39:10 ha-422561 crio[781]: time="2025-10-03T18:39:10.898324441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:39:10 ha-422561 crio[781]: time="2025-10-03T18:39:10.901429905Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:39:10 ha-422561 crio[781]: time="2025-10-03T18:39:10.901809064Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:39:10 ha-422561 crio[781]: time="2025-10-03T18:39:10.917627984Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=54a30861-0969-42f9-83a8-892b0c46a0cb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:39:10 ha-422561 crio[781]: time="2025-10-03T18:39:10.918940242Z" level=info msg="createCtr: deleting container ID 5a3e9b3fbe2464f8a06613740000fa36f1fde3b4a807c0c782f1de403a9c7e82 from idIndex" id=54a30861-0969-42f9-83a8-892b0c46a0cb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:39:10 ha-422561 crio[781]: time="2025-10-03T18:39:10.91897099Z" level=info msg="createCtr: removing container 5a3e9b3fbe2464f8a06613740000fa36f1fde3b4a807c0c782f1de403a9c7e82" id=54a30861-0969-42f9-83a8-892b0c46a0cb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:39:10 ha-422561 crio[781]: time="2025-10-03T18:39:10.9190146Z" level=info msg="createCtr: deleting container 5a3e9b3fbe2464f8a06613740000fa36f1fde3b4a807c0c782f1de403a9c7e82 from storage" id=54a30861-0969-42f9-83a8-892b0c46a0cb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:39:10 ha-422561 crio[781]: time="2025-10-03T18:39:10.921106901Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-422561_kube-system_6ecf19dd95945fcfeaff027fad95c1ee_0" id=54a30861-0969-42f9-83a8-892b0c46a0cb name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:39:15.641564    2717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:15.642095    2717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:15.643551    2717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:15.643972    2717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:15.645284    2717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:39:15 up  1:21,  0 user,  load average: 0.05, 0.06, 0.06
	Linux ha-422561 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:39:06 ha-422561 kubelet[1961]: E1003 18:39:06.926565    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:39:06 ha-422561 kubelet[1961]:         container etcd start failed in pod etcd-ha-422561_kube-system(6803106e6cb30e1b9b282ce29772fddf): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:39:06 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:39:06 ha-422561 kubelet[1961]: E1003 18:39:06.926595    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-422561" podUID="6803106e6cb30e1b9b282ce29772fddf"
	Oct 03 18:39:06 ha-422561 kubelet[1961]: E1003 18:39:06.926776    1961 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:39:06 ha-422561 kubelet[1961]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:39:06 ha-422561 kubelet[1961]:  > podSandboxID="a10975bd62b256134c3b4cd528b6d141353311ccb4309c6a5b3dea224dc6ecb8"
	Oct 03 18:39:06 ha-422561 kubelet[1961]: E1003 18:39:06.926853    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:39:06 ha-422561 kubelet[1961]:         container kube-scheduler start failed in pod kube-scheduler-ha-422561_kube-system(2640157afe5e174d7402164688eed7be): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:39:06 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:39:06 ha-422561 kubelet[1961]: E1003 18:39:06.928004    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-422561" podUID="2640157afe5e174d7402164688eed7be"
	Oct 03 18:39:08 ha-422561 kubelet[1961]: E1003 18:39:08.743006    1961 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-422561.186b0ef272ca1d8c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-422561,UID:ha-422561,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ha-422561 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ha-422561,},FirstTimestamp:2025-10-03 18:35:13.889033612 +0000 UTC m=+0.583840443,LastTimestamp:2025-10-03 18:35:13.889033612 +0000 UTC m=+0.583840443,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-422561,}"
	Oct 03 18:39:10 ha-422561 kubelet[1961]: E1003 18:39:10.019873    1961 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 03 18:39:10 ha-422561 kubelet[1961]: E1003 18:39:10.520193    1961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-422561?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 03 18:39:10 ha-422561 kubelet[1961]: I1003 18:39:10.670115    1961 kubelet_node_status.go:75] "Attempting to register node" node="ha-422561"
	Oct 03 18:39:10 ha-422561 kubelet[1961]: E1003 18:39:10.670498    1961 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-422561"
	Oct 03 18:39:10 ha-422561 kubelet[1961]: E1003 18:39:10.896089    1961 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:39:10 ha-422561 kubelet[1961]: E1003 18:39:10.921356    1961 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:39:10 ha-422561 kubelet[1961]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:39:10 ha-422561 kubelet[1961]:  > podSandboxID="a859763ae69d997e72724d21d35d0ae86fcde7bd11468ef604f5a6d23f35b0f0"
	Oct 03 18:39:10 ha-422561 kubelet[1961]: E1003 18:39:10.921445    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:39:10 ha-422561 kubelet[1961]:         container kube-apiserver start failed in pod kube-apiserver-ha-422561_kube-system(6ecf19dd95945fcfeaff027fad95c1ee): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:39:10 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:39:10 ha-422561 kubelet[1961]: E1003 18:39:10.921471    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-422561" podUID="6ecf19dd95945fcfeaff027fad95c1ee"
	Oct 03 18:39:13 ha-422561 kubelet[1961]: E1003 18:39:13.909080    1961 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-422561\" not found"
	

-- /stdout --
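The "cannot open sd-bus" failures above come from CRI-O, so the next step is the one kubeadm itself suggests: list the kube containers through the CRI-O socket and read their logs. A minimal sketch of that flow, run inside the node (profile name, socket path, and endpoints taken from this report; CONTAINERID is a placeholder):

	# open a shell on the failing control-plane node
	minikube ssh -p ha-422561

	# list all Kubernetes containers, including ones that failed to create
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# read the logs of a failing container
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

	# probe the endpoints the [control-plane-check] phase was polling
	curl -k https://127.0.0.1:10257/healthz    # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez      # kube-scheduler
	curl -k https://192.168.49.2:8443/livez    # kube-apiserver

In this run the "container status" table above is empty, so the probes would be refused just as the wait-control-plane phase reported.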
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561: exit status 6 (295.878699ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1003 18:39:16.012678   70407 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-422561" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (500.73s)
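The status check shows the apiserver Stopped, and its stderr explains why kubectl is unusable: the "ha-422561" endpoint was never written to the kubeconfig. On a healthy cluster the warning's own remedy applies; a sketch using standard minikube/kubectl flags:

	# regenerate the kubeconfig entry for this profile
	minikube update-context -p ha-422561

	# verify what the context now points at
	kubectl config current-context
	kubectl config view --minify

Here it would not help: with the apiserver down there is no endpoint to write, which is also why every kubectl call in the next test fails with "no server found for cluster".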

TestMultiControlPlane/serial/DeployApp (95.37s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (96.211504ms)

** stderr ** 
	error: cluster "ha-422561" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 kubectl -- rollout status deployment/busybox: exit status 1 (95.507339ms)

** stderr ** 
	error: no server found for cluster "ha-422561"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.864375ms)

** stderr ** 
	error: no server found for cluster "ha-422561"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1003 18:39:16.316565   12212 retry.go:31] will retry after 963.42379ms: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.735449ms)

** stderr ** 
	error: no server found for cluster "ha-422561"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1003 18:39:17.377971   12212 retry.go:31] will retry after 2.060167564s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.36839ms)

** stderr ** 
	error: no server found for cluster "ha-422561"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1003 18:39:19.537743   12212 retry.go:31] will retry after 2.055825273s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.319001ms)

** stderr ** 
	error: no server found for cluster "ha-422561"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1003 18:39:21.692423   12212 retry.go:31] will retry after 4.01909998s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.227329ms)

** stderr ** 
	error: no server found for cluster "ha-422561"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1003 18:39:25.812043   12212 retry.go:31] will retry after 7.408023495s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (98.52748ms)

** stderr ** 
	error: no server found for cluster "ha-422561"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1003 18:39:33.320184   12212 retry.go:31] will retry after 8.542172153s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.722438ms)

** stderr ** 
	error: no server found for cluster "ha-422561"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1003 18:39:41.968686   12212 retry.go:31] will retry after 9.720078099s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.829113ms)

** stderr ** 
	error: no server found for cluster "ha-422561"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1003 18:39:51.796508   12212 retry.go:31] will retry after 22.582803895s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (98.86645ms)

** stderr ** 
	error: no server found for cluster "ha-422561"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1003 18:40:14.480401   12212 retry.go:31] will retry after 35.171392793s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.048714ms)

** stderr ** 
	error: no server found for cluster "ha-422561"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
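The retry.go lines above are minikube's test helper polling for pod IPs: each failed kubectl call is retried after a growing, jittered delay (963ms on the first retry, up to roughly 35s on the last) until the attempts are exhausted. A rough shell equivalent of that loop (delays copied from this run; the real backoff is computed in retry.go):

	for delay in 1 2.1 2.1 4 7.4 8.5 9.7 22.6 35.2; do
	  ips="$(out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}')" && break
	  sleep "$delay"
	done

Since the kubeconfig has no server for this cluster, every iteration fails identically and the loop runs to exhaustion.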
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (96.836195ms)

** stderr ** 
	error: no server found for cluster "ha-422561"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 kubectl -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 kubectl -- exec  -- nslookup kubernetes.io: exit status 1 (98.085589ms)

** stderr ** 
	error: no server found for cluster "ha-422561"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 kubectl -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 kubectl -- exec  -- nslookup kubernetes.default: exit status 1 (100.097598ms)

** stderr ** 
	error: no server found for cluster "ha-422561"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (97.018929ms)

** stderr ** 
	error: no server found for cluster "ha-422561"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
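Note the double space in `kubectl -- exec  -- nslookup ...`: the pod name argument is empty because the earlier pod listing returned nothing. Against a working cluster the same three DNS checks would look like this sketch (the `app=busybox` label is an assumption about testdata/ha/ha-pod-dns-test.yaml):

	# pick one pod from the busybox deployment the test applies
	pod="$(kubectl get pods -l app=busybox -o jsonpath='{.items[0].metadata.name}')"

	# external name, in-cluster short name, and fully-qualified service name
	kubectl exec "$pod" -- nslookup kubernetes.io
	kubectl exec "$pod" -- nslookup kubernetes.default
	kubectl exec "$pod" -- nslookup kubernetes.default.svc.cluster.local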
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-422561
helpers_test.go:243: (dbg) docker inspect ha-422561:

-- stdout --
	[
	    {
	        "Id": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	        "Created": "2025-10-03T18:31:00.396132938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 65481,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T18:31:00.428325646Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hostname",
	        "HostsPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hosts",
	        "LogPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512-json.log",
	        "Name": "/ha-422561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-422561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-422561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	                "LowerDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-422561",
	                "Source": "/var/lib/docker/volumes/ha-422561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422561",
	                "name.minikube.sigs.k8s.io": "ha-422561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3084976d568ce061948ebe671f279a80502b1d28417f2be7c2497961eac2a5aa",
	            "SandboxKey": "/var/run/docker/netns/3084976d568c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:e4:3c:eb:d3:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de6aa7ca29f453c0d15cb280abde7ee215f554c89e78e3db8a0f7590468114b5",
	                    "EndpointID": "1b961733d045b77a64efb8afa6caa273125f56ec888f823b790f5454f23ca3b7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422561",
	                        "eef8fc426b2b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
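The inspect dump is long, but the two values that matter for reaching the cluster are the node's IP on the cluster network and the host port forwarded to the apiserver. Both can be extracted with docker's built-in Go templates (values match the output above):

	# node IP on the "ha-422561" network (192.168.49.2)
	docker inspect -f '{{ (index .NetworkSettings.Networks "ha-422561").IPAddress }}' ha-422561

	# 127.0.0.1 host port forwarded to the apiserver's 8443/tcp (32786)
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' ha-422561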
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561: exit status 6 (292.154033ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1003 18:40:50.449784   71370 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-889240 image ls --format short --alsologtostderr                                                     │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image ls --format yaml --alsologtostderr                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh     │ functional-889240 ssh pgrep buildkitd                                                                           │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ image   │ functional-889240 image ls --format json --alsologtostderr                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image ls --format table --alsologtostderr                                                     │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image build -t localhost/my-image:functional-889240 testdata/build --alsologtostderr          │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:27 UTC │
	│ image   │ functional-889240 image ls                                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:27 UTC │ 03 Oct 25 18:27 UTC │
	│ delete  │ -p functional-889240                                                                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │ 03 Oct 25 18:30 UTC │
	│ start   │ ha-422561 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- rollout status deployment/busybox                                                          │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:30:55
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:30:55.351405   64909 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:30:55.351662   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351671   64909 out.go:374] Setting ErrFile to fd 2...
	I1003 18:30:55.351675   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351854   64909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:30:55.352339   64909 out.go:368] Setting JSON to false
	I1003 18:30:55.353203   64909 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4406,"bootTime":1759511849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:30:55.353289   64909 start.go:140] virtualization: kvm guest
	I1003 18:30:55.355458   64909 out.go:179] * [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:30:55.356815   64909 notify.go:220] Checking for updates...
	I1003 18:30:55.356884   64909 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:30:55.358389   64909 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:30:55.359964   64909 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:30:55.361351   64909 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:30:55.362647   64909 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:30:55.363956   64909 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:30:55.365351   64909 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:30:55.387768   64909 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:30:55.387885   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.443407   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.433728571 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.443516   64909 docker.go:318] overlay module found
	I1003 18:30:55.445440   64909 out.go:179] * Using the docker driver based on user configuration
	I1003 18:30:55.446777   64909 start.go:304] selected driver: docker
	I1003 18:30:55.446793   64909 start.go:924] validating driver "docker" against <nil>
	I1003 18:30:55.446808   64909 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:30:55.447403   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.498777   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.489521827 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.498958   64909 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 18:30:55.499206   64909 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:30:55.501187   64909 out.go:179] * Using Docker driver with root privileges
	I1003 18:30:55.502312   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:30:55.502386   64909 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 18:30:55.502397   64909 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 18:30:55.502459   64909 start.go:348] cluster config:
	{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:30:55.503779   64909 out.go:179] * Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	I1003 18:30:55.504816   64909 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:30:55.506028   64909 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:30:55.507131   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:55.507167   64909 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:30:55.507169   64909 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:30:55.507175   64909 cache.go:58] Caching tarball of preloaded images
	I1003 18:30:55.507294   64909 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:30:55.507311   64909 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:30:55.507736   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:30:55.507764   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json: {Name:mk1ece959bac74a473416f0dfc8af04a6136d7b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:30:55.527458   64909 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:30:55.527478   64909 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:30:55.527494   64909 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:30:55.527527   64909 start.go:360] acquireMachinesLock for ha-422561: {Name:mk32fd04a5d9b5f89831583bab7d7527f4d187a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:30:55.527631   64909 start.go:364] duration metric: took 81.336µs to acquireMachinesLock for "ha-422561"
	I1003 18:30:55.527657   64909 start.go:93] Provisioning new machine with config: &{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:30:55.527748   64909 start.go:125] createHost starting for "" (driver="docker")
	I1003 18:30:55.529663   64909 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1003 18:30:55.529898   64909 start.go:159] libmachine.API.Create for "ha-422561" (driver="docker")
	I1003 18:30:55.529933   64909 client.go:168] LocalClient.Create starting
	I1003 18:30:55.530028   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem
	I1003 18:30:55.530072   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530097   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530187   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem
	I1003 18:30:55.530226   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530238   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530612   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 18:30:55.547068   64909 cli_runner.go:211] docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 18:30:55.547129   64909 network_create.go:284] running [docker network inspect ha-422561] to gather additional debugging logs...
	I1003 18:30:55.547146   64909 cli_runner.go:164] Run: docker network inspect ha-422561
	W1003 18:30:55.563141   64909 cli_runner.go:211] docker network inspect ha-422561 returned with exit code 1
	I1003 18:30:55.563167   64909 network_create.go:287] error running [docker network inspect ha-422561]: docker network inspect ha-422561: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-422561 not found
	I1003 18:30:55.563179   64909 network_create.go:289] output of [docker network inspect ha-422561]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-422561 not found
	
	** /stderr **
	I1003 18:30:55.563276   64909 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:30:55.579301   64909 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00157b3a0}
	I1003 18:30:55.579336   64909 network_create.go:124] attempt to create docker network ha-422561 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1003 18:30:55.579388   64909 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-422561 ha-422561
	I1003 18:30:55.634233   64909 network_create.go:108] docker network ha-422561 192.168.49.0/24 created
	I1003 18:30:55.634260   64909 kic.go:121] calculated static IP "192.168.49.2" for the "ha-422561" container
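	Creating a dedicated bridge network with a fixed subnet is what lets minikube hand the node a deterministic address (192.168.49.2 here) and later reserve 192.168.49.254 for the HA virtual IP. An illustrative way to confirm the subnet the network actually received (not a step from the captured run):
		docker network inspect ha-422561 --format '{{(index .IPAM.Config 0).Subnet}}'
		# expected: 192.168.49.0/24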
	I1003 18:30:55.634318   64909 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 18:30:55.649960   64909 cli_runner.go:164] Run: docker volume create ha-422561 --label name.minikube.sigs.k8s.io=ha-422561 --label created_by.minikube.sigs.k8s.io=true
	I1003 18:30:55.667186   64909 oci.go:103] Successfully created a docker volume ha-422561
	I1003 18:30:55.667250   64909 cli_runner.go:164] Run: docker run --rm --name ha-422561-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --entrypoint /usr/bin/test -v ha-422561:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 18:30:56.041615   64909 oci.go:107] Successfully prepared a docker volume ha-422561
	I1003 18:30:56.041648   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:56.041669   64909 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 18:30:56.041727   64909 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 18:31:00.326417   64909 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.284654466s)
	I1003 18:31:00.326457   64909 kic.go:203] duration metric: took 4.284784967s to extract preloaded images to volume ...
	W1003 18:31:00.326567   64909 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1003 18:31:00.326610   64909 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1003 18:31:00.326657   64909 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 18:31:00.381592   64909 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-422561 --name ha-422561 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-422561 --network ha-422561 --ip 192.168.49.2 --volume ha-422561:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1003 18:31:00.641348   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Running}}
	I1003 18:31:00.659876   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:00.678319   64909 cli_runner.go:164] Run: docker exec ha-422561 stat /var/lib/dpkg/alternatives/iptables
	I1003 18:31:00.728414   64909 oci.go:144] the created container "ha-422561" has a running status.
	I1003 18:31:00.728450   64909 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa...
	I1003 18:31:01.103610   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1003 18:31:01.103663   64909 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 18:31:01.128670   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.147200   64909 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 18:31:01.147218   64909 kic_runner.go:114] Args: [docker exec --privileged ha-422561 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 18:31:01.189023   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.207395   64909 machine.go:93] provisionDockerMachine start ...
	I1003 18:31:01.207497   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.226029   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.226282   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.226299   64909 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:31:01.372245   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.372275   64909 ubuntu.go:182] provisioning hostname "ha-422561"
	I1003 18:31:01.372335   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.390674   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.390889   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.390902   64909 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-422561 && echo "ha-422561" | sudo tee /etc/hostname
	I1003 18:31:01.544850   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.544932   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.563695   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.563966   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.564014   64909 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422561/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422561' | sudo tee -a /etc/hosts; 
				fi
			fi
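	The script above rewrites an existing 127.0.1.1 alias to the new hostname, or appends one if none exists, so local reverse lookups of the node name succeed. A hypothetical spot check from the host:
		docker exec ha-422561 grep '^127.0.1.1' /etc/hosts
		# 127.0.1.1 ha-422561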
	I1003 18:31:01.708942   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:31:01.708971   64909 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:31:01.709036   64909 ubuntu.go:190] setting up certificates
	I1003 18:31:01.709048   64909 provision.go:84] configureAuth start
	I1003 18:31:01.709101   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:01.727778   64909 provision.go:143] copyHostCerts
	I1003 18:31:01.727814   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727849   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:31:01.727858   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727940   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:31:01.728054   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728079   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:31:01.728090   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728137   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:31:01.728200   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728225   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:31:01.728234   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728266   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:31:01.728336   64909 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.ha-422561 san=[127.0.0.1 192.168.49.2 ha-422561 localhost minikube]
	I1003 18:31:01.864219   64909 provision.go:177] copyRemoteCerts
	I1003 18:31:01.864281   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:31:01.864317   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.882069   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:01.982800   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:31:01.982877   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 18:31:02.000887   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:31:02.000952   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 18:31:02.017591   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:31:02.017639   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:31:02.034172   64909 provision.go:87] duration metric: took 325.10989ms to configureAuth
	I1003 18:31:02.034202   64909 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:31:02.034393   64909 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:31:02.034508   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.052111   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:02.052326   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:02.052344   64909 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:31:02.295594   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:31:02.295629   64909 machine.go:96] duration metric: took 1.088207423s to provisionDockerMachine
	I1003 18:31:02.295640   64909 client.go:171] duration metric: took 6.765697238s to LocalClient.Create
	I1003 18:31:02.295660   64909 start.go:167] duration metric: took 6.765761646s to libmachine.API.Create "ha-422561"
	I1003 18:31:02.295669   64909 start.go:293] postStartSetup for "ha-422561" (driver="docker")
	I1003 18:31:02.295682   64909 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:31:02.295752   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:31:02.295789   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.312783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.414720   64909 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:31:02.418127   64909 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:31:02.418149   64909 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:31:02.418159   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:31:02.418213   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:31:02.418310   64909 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:31:02.418326   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:31:02.418453   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:31:02.425623   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:02.444405   64909 start.go:296] duration metric: took 148.722871ms for postStartSetup
	I1003 18:31:02.444748   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.462226   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:31:02.462456   64909 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:31:02.462495   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.478737   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.575846   64909 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:31:02.580138   64909 start.go:128] duration metric: took 7.052376255s to createHost
	I1003 18:31:02.580160   64909 start.go:83] releasing machines lock for "ha-422561", held for 7.052515614s
	I1003 18:31:02.580230   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.596730   64909 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:31:02.596776   64909 ssh_runner.go:195] Run: cat /version.json
	I1003 18:31:02.596798   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.596817   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.613783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.614183   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.764865   64909 ssh_runner.go:195] Run: systemctl --version
	I1003 18:31:02.771251   64909 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:31:02.803643   64909 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:31:02.807949   64909 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:31:02.808044   64909 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:31:02.833024   64909 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
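	minikube sidelines pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix so only the kindnet config it installs later is active; renaming them back would restore them. To see what remains enabled (illustrative, not from the log):
		docker exec ha-422561 ls /etc/cni/net.d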
	I1003 18:31:02.833043   64909 start.go:495] detecting cgroup driver to use...
	I1003 18:31:02.833073   64909 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:31:02.833108   64909 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:31:02.847613   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:31:02.858865   64909 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:31:02.858910   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:31:02.874470   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:31:02.890554   64909 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:31:02.970342   64909 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:31:03.055310   64909 docker.go:234] disabling docker service ...
	I1003 18:31:03.055369   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:31:03.072668   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:31:03.084308   64909 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:31:03.163959   64909 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:31:03.241930   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:31:03.253863   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:31:03.266905   64909 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:31:03.266971   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.276795   64909 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:31:03.276848   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.285157   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.293117   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.301070   64909 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:31:03.308489   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.316789   64909 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.329424   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.337651   64909 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:31:03.344839   64909 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:31:03.352026   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.430894   64909 ssh_runner.go:195] Run: sudo systemctl restart crio
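	Taken together, the sed edits above leave the CRI-O drop-in with the systemd cgroup manager, the pinned pause image, and the unprivileged-port sysctl. Roughly, assuming nothing else touches the file, /etc/crio/crio.conf.d/02-crio.conf now carries lines like:
		pause_image = "registry.k8s.io/pause:3.10.1"
		cgroup_manager = "systemd"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]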
	I1003 18:31:03.533915   64909 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:31:03.534002   64909 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:31:03.537783   64909 start.go:563] Will wait 60s for crictl version
	I1003 18:31:03.537838   64909 ssh_runner.go:195] Run: which crictl
	I1003 18:31:03.541393   64909 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:31:03.564883   64909 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:31:03.564963   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.591363   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.619425   64909 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:31:03.620466   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:31:03.637151   64909 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:31:03.641184   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:31:03.651292   64909 kubeadm.go:883] updating cluster {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:31:03.651379   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:31:03.651428   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.680883   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.680904   64909 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:31:03.680955   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.706829   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.706859   64909 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:31:03.706866   64909 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 18:31:03.706953   64909 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
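	The empty ExecStart= line in the generated unit is the standard systemd drop-in idiom: it clears the ExecStart inherited from the packaged kubelet unit before substituting minikube's own command line. Once the drop-in is copied over (a few steps below), the merged result can be inspected on the node with:
		docker exec ha-422561 systemctl cat kubelet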
	I1003 18:31:03.707032   64909 ssh_runner.go:195] Run: crio config
	I1003 18:31:03.751501   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:31:03.751523   64909 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:31:03.751538   64909 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:31:03.751558   64909 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422561 NodeName:ha-422561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:31:03.751669   64909 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422561"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
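	A config of this shape is what later gets copied to /var/tmp/minikube/kubeadm.yaml and handed to kubeadm. To sanity-check such a file by hand, kubeadm can exercise it without mutating the node (illustrative, not a step minikube runs here):
		kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run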
	
	I1003 18:31:03.751691   64909 kube-vip.go:115] generating kube-vip config ...
	I1003 18:31:03.751728   64909 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1003 18:31:03.763009   64909 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
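	Because lsmod reports no ip_vs modules, kube-vip is configured without IPVS-based control-plane load-balancing and falls back to ARP-advertised VIP failover only (vip_arp=true in the manifest below). On a host where IPVS is wanted, loading the module before the start would presumably suffice, e.g.:
		sudo modprobe ip_vs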
	I1003 18:31:03.763125   64909 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
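	This manifest is copied into /etc/kubernetes/manifests a few steps below, so the kubelet runs kube-vip as a static pod and the elected leader claims the VIP 192.168.49.254 on eth0. Once the pod is up, the address should be visible on the node interface (illustrative check):
		docker exec ha-422561 ip addr show eth0 | grep 192.168.49.254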
	I1003 18:31:03.763181   64909 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:31:03.770585   64909 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:31:03.770633   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 18:31:03.778069   64909 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1003 18:31:03.790397   64909 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:31:03.805112   64909 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1003 18:31:03.817362   64909 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1003 18:31:03.830824   64909 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1003 18:31:03.834300   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:31:03.843861   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.921407   64909 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:31:03.944431   64909 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561 for IP: 192.168.49.2
	I1003 18:31:03.944451   64909 certs.go:195] generating shared ca certs ...
	I1003 18:31:03.944468   64909 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:03.944607   64909 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:31:03.944644   64909 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:31:03.944652   64909 certs.go:257] generating profile certs ...
	I1003 18:31:03.944708   64909 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key
	I1003 18:31:03.944722   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt with IP's: []
	I1003 18:31:04.171087   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt ...
	I1003 18:31:04.171118   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt: {Name:mked6cb0f731cbb630d2b187c4975015a458a284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171291   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key ...
	I1003 18:31:04.171301   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key: {Name:mk0c9f0a0941d99f2af213cd316467f053532c99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171391   64909 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905
	I1003 18:31:04.171406   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1003 18:31:04.383185   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 ...
	I1003 18:31:04.383218   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905: {Name:mkc24c55d4abb428b3559a93e6e301be2cab703a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383381   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 ...
	I1003 18:31:04.383394   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905: {Name:mk0576a73623089a3eecf4e34bbbd214545e2247 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383486   64909 certs.go:382] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt
	I1003 18:31:04.383601   64909 certs.go:386] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key
	I1003 18:31:04.383674   64909 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key
	I1003 18:31:04.383689   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt with IP's: []
	I1003 18:31:04.628083   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt ...
	I1003 18:31:04.628112   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt: {Name:mkc19179c67a2559968759165df93d304eb42db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.628269   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key ...
	I1003 18:31:04.628279   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key: {Name:mka8b2392a3d721a70329b852837f3403643f948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.628347   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:31:04.628364   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:31:04.628375   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:31:04.628384   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:31:04.628397   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:31:04.628410   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:31:04.628430   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:31:04.628442   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:31:04.628492   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:31:04.628525   64909 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:31:04.628535   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:31:04.628558   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:31:04.628580   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:31:04.628601   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:31:04.628637   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:04.628666   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.628680   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.628692   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.629254   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:31:04.646879   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:31:04.663465   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:31:04.679837   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:31:04.695959   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 18:31:04.712689   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:31:04.729310   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:31:04.745587   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:31:04.761663   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:31:04.779546   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:31:04.796119   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:31:04.813748   64909 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:31:04.826629   64909 ssh_runner.go:195] Run: openssl version
	I1003 18:31:04.832848   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:31:04.840960   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844465   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844506   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.878276   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:31:04.886714   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:31:04.894672   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898099   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898154   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.931606   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:31:04.940357   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:31:04.948454   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952097   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952148   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.985741   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
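	The pattern above is OpenSSL's hashed-directory layout: each CA under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its subject hash plus a .0 suffix (b5213941.0 for minikubeCA, 3ec20f2e.0 and 51391683.0 for the test certs), which is how TLS clients on the node locate trust anchors. The link name comes straight from the hash command already run above:
		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		# b5213941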
	I1003 18:31:04.994005   64909 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:31:04.997322   64909 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 18:31:04.997379   64909 kubeadm.go:400] StartCluster: {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:31:04.997476   64909 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:31:04.997539   64909 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:31:05.022530   64909 cri.go:89] found id: ""
	I1003 18:31:05.022595   64909 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:31:05.030329   64909 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:31:05.037782   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:31:05.037841   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:31:05.045127   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:31:05.045142   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:31:05.045174   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:31:05.052235   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:31:05.052286   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:31:05.059062   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:31:05.066034   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:31:05.066081   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:31:05.072912   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.079906   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:31:05.079966   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.086575   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:31:05.093500   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:31:05.093559   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:31:05.100246   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:31:05.136174   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:31:05.136254   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:31:05.156320   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:31:05.156407   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:31:05.156462   64909 kubeadm.go:318] OS: Linux
	I1003 18:31:05.156539   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:31:05.156610   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:31:05.156705   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:31:05.156790   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:31:05.156865   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:31:05.156939   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:31:05.157035   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:31:05.157127   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:31:05.210250   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:31:05.210408   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:31:05.210566   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:31:05.217643   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:31:05.219725   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:31:05.219828   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:31:05.219943   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:31:05.398135   64909 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 18:31:05.511875   64909 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 18:31:05.863575   64909 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 18:31:06.044823   64909 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 18:31:06.083505   64909 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 18:31:06.083616   64909 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.181464   64909 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 18:31:06.181591   64909 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.345813   64909 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 18:31:06.565989   64909 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 18:31:06.759809   64909 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 18:31:06.759892   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:31:06.883072   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:31:07.211268   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:31:07.403076   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:31:07.687412   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:31:08.052476   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:31:08.052957   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:31:08.054984   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:31:08.056889   64909 out.go:252]   - Booting up control plane ...
	I1003 18:31:08.056984   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:31:08.057047   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:31:08.057102   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:31:08.069846   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:31:08.069954   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:31:08.077490   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:31:08.077826   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:31:08.077870   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:31:08.170750   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:31:08.170893   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:31:09.172507   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001794723s
	I1003 18:31:09.175233   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:31:09.175335   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:31:09.175418   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:31:09.175496   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:35:09.177158   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	I1003 18:35:09.177466   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	I1003 18:35:09.177673   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	I1003 18:35:09.177731   64909 kubeadm.go:318] 
	I1003 18:35:09.177887   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:35:09.178114   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:35:09.178320   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:35:09.178580   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:35:09.178818   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:35:09.179017   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:35:09.179033   64909 kubeadm.go:318] 
	I1003 18:35:09.182028   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:09.182304   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:35:09.182918   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1003 18:35:09.183015   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
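
kubeadm's wait-control-plane phase above polls three fixed health endpoints. They can be probed by hand from inside the node (e.g. via 'minikube ssh -p ha-422561') to see which component fails first; a sketch, assuming the same addresses reported in the log:

	# kube-apiserver liveness (self-signed serving cert, hence -k)
	curl -k https://192.168.49.2:8443/livez
	# kube-controller-manager and kube-scheduler health endpoints
	curl -k https://127.0.0.1:10257/healthz
	curl -k https://127.0.0.1:10259/livez
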
	W1003 18:35:09.183174   64909 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001794723s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
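
Following the crictl suggestion printed by kubeadm above, a manual triage pass on this node might look like the following; CONTAINERID is a placeholder for whatever ID the first command reports:

	# list every kube-* container, including ones that already exited
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# show state and exit reason for one container
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock inspect CONTAINERID
	# read that container's logs
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
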
	I1003 18:35:09.183243   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 18:35:11.953646   64909 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.770379999s)
	I1003 18:35:11.953721   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:35:11.965876   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:35:11.965928   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:35:11.973363   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:35:11.973382   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:35:11.973419   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:35:11.980752   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:35:11.980806   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:35:11.987857   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:35:11.995081   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:35:11.995127   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:35:12.001778   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.009063   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:35:12.009126   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.015927   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:35:12.022875   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:35:12.022943   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:35:12.029549   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:35:12.082477   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:12.138594   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:39:14.312592   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1003 18:39:14.312818   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 18:39:14.315914   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:39:14.315992   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:39:14.316115   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:39:14.316166   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:39:14.316250   64909 kubeadm.go:318] OS: Linux
	I1003 18:39:14.316328   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:39:14.316401   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:39:14.316475   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:39:14.316553   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:39:14.316624   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:39:14.316701   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:39:14.316751   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:39:14.316825   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:39:14.316936   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:39:14.317123   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:39:14.317262   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:39:14.317314   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:39:14.319872   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:39:14.319940   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:39:14.320033   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:39:14.320122   64909 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 18:39:14.320186   64909 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 18:39:14.320253   64909 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 18:39:14.320299   64909 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 18:39:14.320350   64909 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 18:39:14.320420   64909 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 18:39:14.320509   64909 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 18:39:14.320604   64909 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 18:39:14.320671   64909 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 18:39:14.320751   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:39:14.320828   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:39:14.320904   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:39:14.321006   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:39:14.321096   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:39:14.321174   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:39:14.321279   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:39:14.321373   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:39:14.322793   64909 out.go:252]   - Booting up control plane ...
	I1003 18:39:14.322884   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:39:14.323004   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:39:14.323072   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:39:14.323162   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:39:14.323237   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:39:14.323335   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:39:14.323415   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:39:14.323456   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:39:14.323557   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:39:14.323652   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:39:14.323702   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001540709s
	I1003 18:39:14.323792   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:39:14.323860   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:39:14.323946   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:39:14.324043   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:39:14.324124   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	I1003 18:39:14.324186   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	I1003 18:39:14.324248   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	I1003 18:39:14.324258   64909 kubeadm.go:318] 
	I1003 18:39:14.324352   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:39:14.324439   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:39:14.324519   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:39:14.324595   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:39:14.324687   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:39:14.324773   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:39:14.324799   64909 kubeadm.go:318] 
	I1003 18:39:14.324836   64909 kubeadm.go:402] duration metric: took 8m9.327461574s to StartCluster
	I1003 18:39:14.324877   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:39:14.324935   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:39:14.352551   64909 cri.go:89] found id: ""
	I1003 18:39:14.352594   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.352608   64909 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:39:14.352617   64909 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:39:14.352684   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:39:14.376604   64909 cri.go:89] found id: ""
	I1003 18:39:14.376629   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.376638   64909 logs.go:284] No container was found matching "etcd"
	I1003 18:39:14.376643   64909 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:39:14.376750   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:39:14.401480   64909 cri.go:89] found id: ""
	I1003 18:39:14.401504   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.401512   64909 logs.go:284] No container was found matching "coredns"
	I1003 18:39:14.401517   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:39:14.401582   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:39:14.426822   64909 cri.go:89] found id: ""
	I1003 18:39:14.426858   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.426871   64909 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:39:14.426879   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:39:14.426946   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:39:14.451679   64909 cri.go:89] found id: ""
	I1003 18:39:14.451710   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.451722   64909 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:39:14.451730   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:39:14.451787   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:39:14.477253   64909 cri.go:89] found id: ""
	I1003 18:39:14.477275   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.477282   64909 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:39:14.477288   64909 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:39:14.477332   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:39:14.501586   64909 cri.go:89] found id: ""
	I1003 18:39:14.501613   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.501621   64909 logs.go:284] No container was found matching "kindnet"
	I1003 18:39:14.501632   64909 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:39:14.501643   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:39:14.561285   64909 logs.go:123] Gathering logs for container status ...
	I1003 18:39:14.561318   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:39:14.589589   64909 logs.go:123] Gathering logs for kubelet ...
	I1003 18:39:14.589614   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:39:14.656775   64909 logs.go:123] Gathering logs for dmesg ...
	I1003 18:39:14.656809   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:39:14.668000   64909 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:39:14.668023   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:39:14.725446   64909 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
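
The describe-nodes failure above is a symptom rather than a cause: kubectl dials localhost:8443 and nothing is listening. Two quick checks on the node narrow this down; a sketch, assuming the ss tool from iproute2 is available in the kicbase image:

	# is anything bound to the apiserver port?
	sudo ss -ltnp | grep 8443
	# kubelet launches the static pods; its journal records why they fail
	sudo journalctl -u kubelet -n 400 --no-pager
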
	W1003 18:39:14.725478   64909 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	W1003 18:39:14.725530   64909 out.go:285] * 
	W1003 18:39:14.725612   64909 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:39:14.725629   64909 out.go:285] * 
	W1003 18:39:14.727399   64909 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
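
For this run the suggested log collection would target the ha-422561 profile explicitly; a sketch:

	# collect the full log bundle for the failing profile
	minikube logs --file=logs.txt -p ha-422561
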
	I1003 18:39:14.731087   64909 out.go:203] 
	W1003 18:39:14.732560   64909 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:39:14.732585   64909 out.go:285] * 
	I1003 18:39:14.734183   64909 out.go:203] 
	
	
	==> CRI-O <==
	Oct 03 18:40:42 ha-422561 crio[781]: time="2025-10-03T18:40:42.924226182Z" level=info msg="createCtr: removing container 88b5d6e917642dd6a17610e58e1d82ba0173c1bba59697739259e049f795496f" id=f813270a-3b77-4aee-ba4f-286ca5f3c68c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:42 ha-422561 crio[781]: time="2025-10-03T18:40:42.924256755Z" level=info msg="createCtr: deleting container 88b5d6e917642dd6a17610e58e1d82ba0173c1bba59697739259e049f795496f from storage" id=f813270a-3b77-4aee-ba4f-286ca5f3c68c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:42 ha-422561 crio[781]: time="2025-10-03T18:40:42.926130977Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-422561_kube-system_e643a03771f1e72f527532eff2c66a9c_0" id=f813270a-3b77-4aee-ba4f-286ca5f3c68c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.896589105Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=6a4d82f0-368a-4e9c-8a68-613aeca5ca6d name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.897504377Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=34b140e2-aed5-4daf-9de3-b9e65f8ce6db name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.898312455Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-422561/kube-scheduler" id=c0ff24b2-5327-4713-afb5-961b59b98a21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.898543485Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.90171654Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.902172797Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.918904384Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=c0ff24b2-5327-4713-afb5-961b59b98a21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.920218692Z" level=info msg="createCtr: deleting container ID b0526328ea3bcd02f6dad5a98d49dcec6de935893fc87cf4b7225f9aeb00c5f3 from idIndex" id=c0ff24b2-5327-4713-afb5-961b59b98a21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.920247849Z" level=info msg="createCtr: removing container b0526328ea3bcd02f6dad5a98d49dcec6de935893fc87cf4b7225f9aeb00c5f3" id=c0ff24b2-5327-4713-afb5-961b59b98a21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.920276478Z" level=info msg="createCtr: deleting container b0526328ea3bcd02f6dad5a98d49dcec6de935893fc87cf4b7225f9aeb00c5f3 from storage" id=c0ff24b2-5327-4713-afb5-961b59b98a21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.922174432Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-422561_kube-system_2640157afe5e174d7402164688eed7be_0" id=c0ff24b2-5327-4713-afb5-961b59b98a21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.895784501Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=7701b0da-d602-45bd-b4be-827842374e9c name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.896698231Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=459c77c9-c9a1-4442-8efd-dba46c09a87b name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.897487621Z" level=info msg="Creating container: kube-system/etcd-ha-422561/etcd" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.897719618Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.902014628Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.902421286Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.918695418Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.920064591Z" level=info msg="createCtr: deleting container ID 60eac4f05bb70cc097a023480fc9d2f45ed0628f63763a71867879f1fd5fa153 from idIndex" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.920098966Z" level=info msg="createCtr: removing container 60eac4f05bb70cc097a023480fc9d2f45ed0628f63763a71867879f1fd5fa153" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.920129084Z" level=info msg="createCtr: deleting container 60eac4f05bb70cc097a023480fc9d2f45ed0628f63763a71867879f1fd5fa153 from storage" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.922274937Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-422561_kube-system_6803106e6cb30e1b9b282ce29772fddf_0" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:40:51.012948    3053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:51.013495    3053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:51.015089    3053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:51.015449    3053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:51.016891    3053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:40:51 up  1:23,  0 user,  load average: 0.12, 0.08, 0.07
	Linux ha-422561 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:40:42 ha-422561 kubelet[1961]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-422561_kube-system(e643a03771f1e72f527532eff2c66a9c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:42 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:40:42 ha-422561 kubelet[1961]: E1003 18:40:42.926532    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-422561" podUID="e643a03771f1e72f527532eff2c66a9c"
	Oct 03 18:40:43 ha-422561 kubelet[1961]: E1003 18:40:43.348285    1961 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-422561.186b0ef272ca351c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-422561,UID:ha-422561,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-422561 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-422561,},FirstTimestamp:2025-10-03 18:35:13.889039644 +0000 UTC m=+0.583846472,LastTimestamp:2025-10-03 18:35:13.889039644 +0000 UTC m=+0.583846472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-422561,}"
	Oct 03 18:40:43 ha-422561 kubelet[1961]: E1003 18:40:43.916095    1961 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-422561\" not found"
	Oct 03 18:40:45 ha-422561 kubelet[1961]: E1003 18:40:45.896130    1961 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:40:45 ha-422561 kubelet[1961]: E1003 18:40:45.922434    1961 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:40:45 ha-422561 kubelet[1961]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:45 ha-422561 kubelet[1961]:  > podSandboxID="a10975bd62b256134c3b4cd528b6d141353311ccb4309c6a5b3dea224dc6ecb8"
	Oct 03 18:40:45 ha-422561 kubelet[1961]: E1003 18:40:45.922540    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:40:45 ha-422561 kubelet[1961]:         container kube-scheduler start failed in pod kube-scheduler-ha-422561_kube-system(2640157afe5e174d7402164688eed7be): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:45 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:40:45 ha-422561 kubelet[1961]: E1003 18:40:45.922583    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-422561" podUID="2640157afe5e174d7402164688eed7be"
	Oct 03 18:40:46 ha-422561 kubelet[1961]: E1003 18:40:46.895363    1961 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:40:46 ha-422561 kubelet[1961]: E1003 18:40:46.922547    1961 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:40:46 ha-422561 kubelet[1961]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:46 ha-422561 kubelet[1961]:  > podSandboxID="d8c61f11856eaf647667c61ede204d0da4f897662d4f66aa1405fe26a28a98f5"
	Oct 03 18:40:46 ha-422561 kubelet[1961]: E1003 18:40:46.922663    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:40:46 ha-422561 kubelet[1961]:         container etcd start failed in pod etcd-ha-422561_kube-system(6803106e6cb30e1b9b282ce29772fddf): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:46 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:40:46 ha-422561 kubelet[1961]: E1003 18:40:46.922710    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-422561" podUID="6803106e6cb30e1b9b282ce29772fddf"
	Oct 03 18:40:48 ha-422561 kubelet[1961]: E1003 18:40:48.535582    1961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-422561?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 03 18:40:48 ha-422561 kubelet[1961]: I1003 18:40:48.695745    1961 kubelet_node_status.go:75] "Attempting to register node" node="ha-422561"
	Oct 03 18:40:48 ha-422561 kubelet[1961]: E1003 18:40:48.696172    1961 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-422561"
	Oct 03 18:40:49 ha-422561 kubelet[1961]: E1003 18:40:49.418760    1961 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	

-- /stdout --
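
Every control-plane container in the dump above fails with the same CRI-O error, "container create failed: cannot open sd-bus: No such file or directory", which CRI-O raises when its systemd cgroup manager cannot reach the systemd D-Bus socket inside the node (note the docker info output later in this report shows CgroupDriver:systemd). A minimal triage sketch, assuming the profile name ha-422561 from the logs; the D-Bus socket path and the /etc/crio config location are conventional defaults, not confirmed by this report:

	# Check whether the systemd D-Bus socket exists inside the node.
	minikube ssh -p ha-422561 -- ls -l /run/dbus/system_bus_socket

	# See which cgroup manager CRI-O is configured for; "systemd" requires a working sd-bus.
	minikube ssh -p ha-422561 -- sudo grep -ri cgroup_manager /etc/crio/

	# List the failing control-plane containers, as the kubeadm output above suggests.
	minikube ssh -p ha-422561 -- sudo crictl ps -a | grep kube | grep -v pause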
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561: exit status 6 (297.313618ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1003 18:40:51.383167   71693 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-422561" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (95.37s)
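
The three control-plane health checks in the kubeadm output never succeeded, so every later step in this serial group fails the same way. The livez endpoint kubeadm was polling can be probed directly, reusing the node IP 192.168.49.2 and port 8443 from the logs; a sketch, assuming a Linux host where the Docker bridge network is routable (-k skips TLS verification, which is fine for a liveness probe):

	# Reproduce kubeadm's control-plane check against the API server.
	curl -k https://192.168.49.2:8443/livez
	# While kube-apiserver cannot be created, expect "connection refused" here.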

TestMultiControlPlane/serial/PingHostFromPods (1.35s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (99.221928ms)

** stderr ** 
	error: no server found for cluster "ha-422561"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
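
The "no server found for cluster" error means the kubeconfig used by the tests has no cluster entry for ha-422561, which the status stderr further below confirms ("does not appear in .../kubeconfig"). A sketch for confirming and repairing the context, using the kubeconfig path shown in that stderr; repair is only useful once the cluster actually comes up:

	# Confirm the cluster entry is missing from the kubeconfig the tests use.
	kubectl config get-clusters --kubeconfig /home/jenkins/minikube-integration/21625-8669/kubeconfig

	# Rewrite the context from the profile, as the minikube status warning suggests.
	minikube update-context -p ha-422561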
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-422561
helpers_test.go:243: (dbg) docker inspect ha-422561:

-- stdout --
	[
	    {
	        "Id": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	        "Created": "2025-10-03T18:31:00.396132938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 65481,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T18:31:00.428325646Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hostname",
	        "HostsPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hosts",
	        "LogPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512-json.log",
	        "Name": "/ha-422561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-422561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-422561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	                "LowerDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-422561",
	                "Source": "/var/lib/docker/volumes/ha-422561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422561",
	                "name.minikube.sigs.k8s.io": "ha-422561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3084976d568ce061948ebe671f279a80502b1d28417f2be7c2497961eac2a5aa",
	            "SandboxKey": "/var/run/docker/netns/3084976d568c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:e4:3c:eb:d3:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de6aa7ca29f453c0d15cb280abde7ee215f554c89e78e3db8a0f7590468114b5",
	                    "EndpointID": "1b961733d045b77a64efb8afa6caa273125f56ec888f823b790f5454f23ca3b7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422561",
	                        "eef8fc426b2b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
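
The inspect output shows each node port published only on 127.0.0.1 with an ephemeral host port, e.g. the API server's 8443/tcp mapped to 32786. A sketch for pulling a single mapping out without reading the whole JSON, using the container name ha-422561 from above:

	# Print the host address:port Docker mapped to the API server port.
	docker port ha-422561 8443/tcp

	# Equivalent, via a Go template over docker inspect.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-422561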
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561: exit status 6 (294.747427ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1003 18:40:51.796412   71836 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-889240 image ls --format yaml --alsologtostderr                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ ssh     │ functional-889240 ssh pgrep buildkitd                                                                           │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ image   │ functional-889240 image ls --format json --alsologtostderr                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image ls --format table --alsologtostderr                                                     │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image build -t localhost/my-image:functional-889240 testdata/build --alsologtostderr          │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:27 UTC │
	│ image   │ functional-889240 image ls                                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:27 UTC │ 03 Oct 25 18:27 UTC │
	│ delete  │ -p functional-889240                                                                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │ 03 Oct 25 18:30 UTC │
	│ start   │ ha-422561 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- rollout status deployment/busybox                                                          │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:30:55
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:30:55.351405   64909 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:30:55.351662   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351671   64909 out.go:374] Setting ErrFile to fd 2...
	I1003 18:30:55.351675   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351854   64909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:30:55.352339   64909 out.go:368] Setting JSON to false
	I1003 18:30:55.353203   64909 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4406,"bootTime":1759511849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:30:55.353289   64909 start.go:140] virtualization: kvm guest
	I1003 18:30:55.355458   64909 out.go:179] * [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:30:55.356815   64909 notify.go:220] Checking for updates...
	I1003 18:30:55.356884   64909 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:30:55.358389   64909 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:30:55.359964   64909 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:30:55.361351   64909 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:30:55.362647   64909 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:30:55.363956   64909 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:30:55.365351   64909 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:30:55.387768   64909 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:30:55.387885   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.443407   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.433728571 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.443516   64909 docker.go:318] overlay module found
	I1003 18:30:55.445440   64909 out.go:179] * Using the docker driver based on user configuration
	I1003 18:30:55.446777   64909 start.go:304] selected driver: docker
	I1003 18:30:55.446793   64909 start.go:924] validating driver "docker" against <nil>
	I1003 18:30:55.446808   64909 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:30:55.447403   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.498777   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.489521827 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.498958   64909 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 18:30:55.499206   64909 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:30:55.501187   64909 out.go:179] * Using Docker driver with root privileges
	I1003 18:30:55.502312   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:30:55.502386   64909 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 18:30:55.502397   64909 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 18:30:55.502459   64909 start.go:348] cluster config:
	{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:30:55.503779   64909 out.go:179] * Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	I1003 18:30:55.504816   64909 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:30:55.506028   64909 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:30:55.507131   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:55.507167   64909 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:30:55.507169   64909 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:30:55.507175   64909 cache.go:58] Caching tarball of preloaded images
	I1003 18:30:55.507294   64909 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:30:55.507311   64909 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:30:55.507736   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:30:55.507764   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json: {Name:mk1ece959bac74a473416f0dfc8af04a6136d7b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:30:55.527458   64909 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:30:55.527478   64909 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:30:55.527494   64909 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:30:55.527527   64909 start.go:360] acquireMachinesLock for ha-422561: {Name:mk32fd04a5d9b5f89831583bab7d7527f4d187a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:30:55.527631   64909 start.go:364] duration metric: took 81.336µs to acquireMachinesLock for "ha-422561"
	I1003 18:30:55.527657   64909 start.go:93] Provisioning new machine with config: &{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:30:55.527748   64909 start.go:125] createHost starting for "" (driver="docker")
	I1003 18:30:55.529663   64909 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1003 18:30:55.529898   64909 start.go:159] libmachine.API.Create for "ha-422561" (driver="docker")
	I1003 18:30:55.529933   64909 client.go:168] LocalClient.Create starting
	I1003 18:30:55.530028   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem
	I1003 18:30:55.530072   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530097   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530187   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem
	I1003 18:30:55.530226   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530238   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530612   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 18:30:55.547068   64909 cli_runner.go:211] docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 18:30:55.547129   64909 network_create.go:284] running [docker network inspect ha-422561] to gather additional debugging logs...
	I1003 18:30:55.547146   64909 cli_runner.go:164] Run: docker network inspect ha-422561
	W1003 18:30:55.563141   64909 cli_runner.go:211] docker network inspect ha-422561 returned with exit code 1
	I1003 18:30:55.563167   64909 network_create.go:287] error running [docker network inspect ha-422561]: docker network inspect ha-422561: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-422561 not found
	I1003 18:30:55.563179   64909 network_create.go:289] output of [docker network inspect ha-422561]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-422561 not found
	
	** /stderr **
	I1003 18:30:55.563276   64909 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:30:55.579301   64909 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00157b3a0}
	I1003 18:30:55.579336   64909 network_create.go:124] attempt to create docker network ha-422561 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1003 18:30:55.579388   64909 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-422561 ha-422561
	I1003 18:30:55.634233   64909 network_create.go:108] docker network ha-422561 192.168.49.0/24 created
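The subnet and gateway chosen above can be confirmed against the live network state; a minimal sketch, assuming the Docker CLI is available on the host:

	docker network inspect ha-422561 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.49.0/24 192.168.49.1
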
	I1003 18:30:55.634260   64909 kic.go:121] calculated static IP "192.168.49.2" for the "ha-422561" container
	I1003 18:30:55.634318   64909 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 18:30:55.649960   64909 cli_runner.go:164] Run: docker volume create ha-422561 --label name.minikube.sigs.k8s.io=ha-422561 --label created_by.minikube.sigs.k8s.io=true
	I1003 18:30:55.667186   64909 oci.go:103] Successfully created a docker volume ha-422561
	I1003 18:30:55.667250   64909 cli_runner.go:164] Run: docker run --rm --name ha-422561-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --entrypoint /usr/bin/test -v ha-422561:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 18:30:56.041615   64909 oci.go:107] Successfully prepared a docker volume ha-422561
	I1003 18:30:56.041648   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:56.041669   64909 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 18:30:56.041727   64909 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 18:31:00.326417   64909 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.284654466s)
	I1003 18:31:00.326457   64909 kic.go:203] duration metric: took 4.284784967s to extract preloaded images to volume ...
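The preload tarball is unpacked straight into the ha-422561 volume, which the node container later mounts at /var. A sketch to peek at what landed there, assuming a throwaway busybox container (an illustrative helper, not part of the test):

	docker run --rm -v ha-422561:/var busybox ls /var/lib
	# the preloaded cri-o image store should appear under /var/lib/containers
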
	W1003 18:31:00.326567   64909 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1003 18:31:00.326610   64909 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1003 18:31:00.326657   64909 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 18:31:00.381592   64909 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-422561 --name ha-422561 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-422561 --network ha-422561 --ip 192.168.49.2 --volume ha-422561:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1003 18:31:00.641348   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Running}}
	I1003 18:31:00.659876   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:00.678319   64909 cli_runner.go:164] Run: docker exec ha-422561 stat /var/lib/dpkg/alternatives/iptables
	I1003 18:31:00.728414   64909 oci.go:144] the created container "ha-422561" has a running status.
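The docker run above publishes the node's SSH, API-server, and registry ports to ephemeral host ports on 127.0.0.1 (the --publish=127.0.0.1::22 flags); the port 32783 used by the SSH client further down is resolved from that mapping. A sketch, assuming the Docker CLI:

	docker port ha-422561 22/tcp
	# e.g. 127.0.0.1:32783
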
	I1003 18:31:00.728450   64909 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa...
	I1003 18:31:01.103610   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1003 18:31:01.103663   64909 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 18:31:01.128670   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.147200   64909 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 18:31:01.147218   64909 kic_runner.go:114] Args: [docker exec --privileged ha-422561 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 18:31:01.189023   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.207395   64909 machine.go:93] provisionDockerMachine start ...
	I1003 18:31:01.207497   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.226029   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.226282   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.226299   64909 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:31:01.372245   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.372275   64909 ubuntu.go:182] provisioning hostname "ha-422561"
	I1003 18:31:01.372335   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.390674   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.390889   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.390902   64909 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-422561 && echo "ha-422561" | sudo tee /etc/hostname
	I1003 18:31:01.544850   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.544932   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.563695   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.563966   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.564014   64909 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422561/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:31:01.708942   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:31:01.708971   64909 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:31:01.709036   64909 ubuntu.go:190] setting up certificates
	I1003 18:31:01.709048   64909 provision.go:84] configureAuth start
	I1003 18:31:01.709101   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:01.727778   64909 provision.go:143] copyHostCerts
	I1003 18:31:01.727814   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727849   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:31:01.727858   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727940   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:31:01.728054   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728079   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:31:01.728090   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728137   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:31:01.728200   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728225   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:31:01.728234   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728266   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:31:01.728336   64909 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.ha-422561 san=[127.0.0.1 192.168.49.2 ha-422561 localhost minikube]
	I1003 18:31:01.864219   64909 provision.go:177] copyRemoteCerts
	I1003 18:31:01.864281   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:31:01.864317   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.882069   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:01.982800   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:31:01.982877   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 18:31:02.000887   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:31:02.000952   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 18:31:02.017591   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:31:02.017639   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:31:02.034172   64909 provision.go:87] duration metric: took 325.10989ms to configureAuth
	I1003 18:31:02.034202   64909 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:31:02.034393   64909 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:31:02.034508   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.052111   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:02.052326   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:02.052344   64909 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:31:02.295594   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:31:02.295629   64909 machine.go:96] duration metric: took 1.088207423s to provisionDockerMachine
	I1003 18:31:02.295640   64909 client.go:171] duration metric: took 6.765697238s to LocalClient.Create
	I1003 18:31:02.295660   64909 start.go:167] duration metric: took 6.765761646s to libmachine.API.Create "ha-422561"
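The CRIO_MINIKUBE_OPTIONS drop-in written over SSH above can be read back directly from the node; a sketch, assuming docker exec access to the container:

	docker exec ha-422561 cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
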
	I1003 18:31:02.295669   64909 start.go:293] postStartSetup for "ha-422561" (driver="docker")
	I1003 18:31:02.295682   64909 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:31:02.295752   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:31:02.295789   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.312783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.414720   64909 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:31:02.418127   64909 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:31:02.418149   64909 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:31:02.418159   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:31:02.418213   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:31:02.418310   64909 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:31:02.418326   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:31:02.418453   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:31:02.425623   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:02.444405   64909 start.go:296] duration metric: took 148.722871ms for postStartSetup
	I1003 18:31:02.444748   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.462226   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:31:02.462456   64909 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:31:02.462495   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.478737   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.575846   64909 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:31:02.580138   64909 start.go:128] duration metric: took 7.052376255s to createHost
	I1003 18:31:02.580160   64909 start.go:83] releasing machines lock for "ha-422561", held for 7.052515614s
	I1003 18:31:02.580230   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.596730   64909 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:31:02.596776   64909 ssh_runner.go:195] Run: cat /version.json
	I1003 18:31:02.596798   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.596817   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.613783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.614183   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.764865   64909 ssh_runner.go:195] Run: systemctl --version
	I1003 18:31:02.771251   64909 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:31:02.803643   64909 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:31:02.807949   64909 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:31:02.808044   64909 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:31:02.833024   64909 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
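The bridge CNI configs are not deleted, only renamed with a .mk_disabled suffix so that the kindnet CNI chosen later takes precedence. A sketch to confirm, assuming docker exec access:

	docker exec ha-422561 ls /etc/cni/net.d
	# e.g. 10-crio-bridge.conflist.disabled.mk_disabled  87-podman-bridge.conflist.mk_disabled
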
	I1003 18:31:02.833043   64909 start.go:495] detecting cgroup driver to use...
	I1003 18:31:02.833073   64909 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:31:02.833108   64909 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:31:02.847613   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:31:02.858865   64909 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:31:02.858910   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:31:02.874470   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:31:02.890554   64909 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:31:02.970342   64909 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:31:03.055310   64909 docker.go:234] disabling docker service ...
	I1003 18:31:03.055369   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:31:03.072668   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:31:03.084308   64909 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:31:03.163959   64909 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:31:03.241930   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:31:03.253863   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:31:03.266905   64909 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:31:03.266971   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.276795   64909 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:31:03.276848   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.285157   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.293117   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.301070   64909 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:31:03.308489   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.316789   64909 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.329424   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.337651   64909 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:31:03.344839   64909 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:31:03.352026   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.430894   64909 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 18:31:03.533915   64909 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:31:03.534002   64909 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:31:03.537783   64909 start.go:563] Will wait 60s for crictl version
	I1003 18:31:03.537838   64909 ssh_runner.go:195] Run: which crictl
	I1003 18:31:03.541393   64909 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:31:03.564883   64909 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
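The version probe above goes through crictl against the crio socket; the same query can be run by hand inside the node. A sketch, assuming docker exec access and the crictl binary preinstalled in the kicbase image:

	docker exec ha-422561 sudo crictl version
	# RuntimeName: cri-o, RuntimeVersion: 1.34.1, RuntimeApiVersion: v1
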
	I1003 18:31:03.564963   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.591363   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.619425   64909 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:31:03.620466   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:31:03.637151   64909 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:31:03.641184   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:31:03.651292   64909 kubeadm.go:883] updating cluster {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:31:03.651379   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:31:03.651428   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.680883   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.680904   64909 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:31:03.680955   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.706829   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.706859   64909 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:31:03.706866   64909 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 18:31:03.706953   64909 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 18:31:03.707032   64909 ssh_runner.go:195] Run: crio config
	I1003 18:31:03.751501   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:31:03.751523   64909 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:31:03.751538   64909 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:31:03.751558   64909 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422561 NodeName:ha-422561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:31:03.751669   64909 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422561"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
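The kubeadm config rendered above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp below) before being copied into place. Recent kubeadm releases include an offline validate subcommand; a sketch, assuming kubeadm v1.34 semantics and that the staged file has been written:

	docker exec ha-422561 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
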
	
	I1003 18:31:03.751691   64909 kube-vip.go:115] generating kube-vip config ...
	I1003 18:31:03.751728   64909 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1003 18:31:03.763009   64909 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:31:03.763125   64909 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
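Per the kube-vip.go:163 message above, control-plane load-balancing is skipped because the ip_vs kernel modules were not visible via lsmod, leaving kube-vip in ARP/leader-election mode for the VIP 192.168.49.254. A sketch of the same gate, assuming modprobe is permitted on the host kernel:

	lsmod | grep ip_vs || sudo modprobe ip_vs
	# if the module loads, the ipvs-based load-balancer path would presumably be taken on a fresh start
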
	I1003 18:31:03.763181   64909 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:31:03.770585   64909 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:31:03.770633   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 18:31:03.778069   64909 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1003 18:31:03.790397   64909 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:31:03.805112   64909 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1003 18:31:03.817362   64909 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1003 18:31:03.830824   64909 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1003 18:31:03.834300   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
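Both the host.minikube.internal and control-plane.minikube.internal entries are injected by rewriting /etc/hosts in place; resolution can be spot-checked from inside the node. A sketch, assuming docker exec access:

	docker exec ha-422561 getent hosts control-plane.minikube.internal
	# 192.168.49.254  control-plane.minikube.internal
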
	I1003 18:31:03.843861   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.921407   64909 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:31:03.944431   64909 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561 for IP: 192.168.49.2
	I1003 18:31:03.944451   64909 certs.go:195] generating shared ca certs ...
	I1003 18:31:03.944468   64909 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:03.944607   64909 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:31:03.944644   64909 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:31:03.944652   64909 certs.go:257] generating profile certs ...
	I1003 18:31:03.944708   64909 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key
	I1003 18:31:03.944722   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt with IP's: []
	I1003 18:31:04.171087   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt ...
	I1003 18:31:04.171118   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt: {Name:mked6cb0f731cbb630d2b187c4975015a458a284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171291   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key ...
	I1003 18:31:04.171301   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key: {Name:mk0c9f0a0941d99f2af213cd316467f053532c99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171391   64909 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905
	I1003 18:31:04.171406   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1003 18:31:04.383185   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 ...
	I1003 18:31:04.383218   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905: {Name:mkc24c55d4abb428b3559a93e6e301be2cab703a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383381   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 ...
	I1003 18:31:04.383394   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905: {Name:mk0576a73623089a3eecf4e34bbbd214545e2247 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383486   64909 certs.go:382] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt
	I1003 18:31:04.383601   64909 certs.go:386] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key
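The apiserver serving cert assembled above is signed for the SANs listed at crypto.go:68 (the service IP 10.96.0.1, localhost, the node IP, and the HA VIP 192.168.49.254). A sketch to inspect them, assuming openssl on the host:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
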
	I1003 18:31:04.383674   64909 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key
	I1003 18:31:04.383689   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt with IP's: []
	I1003 18:31:04.628083   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt ...
	I1003 18:31:04.628112   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt: {Name:mkc19179c67a2559968759165df93d304eb42db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.628269   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key ...
	I1003 18:31:04.628279   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key: {Name:mka8b2392a3d721a70329b852837f3403643f948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.628347   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:31:04.628364   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:31:04.628375   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:31:04.628384   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:31:04.628397   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:31:04.628410   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:31:04.628430   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:31:04.628442   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:31:04.628492   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:31:04.628525   64909 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:31:04.628535   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:31:04.628558   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:31:04.628580   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:31:04.628601   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:31:04.628637   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:04.628666   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.628680   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.628692   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.629254   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:31:04.646879   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:31:04.663465   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:31:04.679837   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:31:04.695959   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 18:31:04.712689   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:31:04.729310   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:31:04.745587   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:31:04.761663   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:31:04.779546   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:31:04.796119   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:31:04.813748   64909 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:31:04.826629   64909 ssh_runner.go:195] Run: openssl version
	I1003 18:31:04.832848   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:31:04.840960   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844465   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844506   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.878276   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:31:04.886714   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:31:04.894672   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898099   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898154   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.931606   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:31:04.940357   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:31:04.948454   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952097   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952148   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.985741   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
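The /etc/ssl/certs/<hash>.0 symlinks created above follow OpenSSL's subject-hash lookup convention, which is where the names 3ec20f2e, b5213941, and 51391683 come from. A sketch, assuming openssl on the host:

	openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941
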
	I1003 18:31:04.994005   64909 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:31:04.997322   64909 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 18:31:04.997379   64909 kubeadm.go:400] StartCluster: {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:31:04.997476   64909 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:31:04.997539   64909 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:31:05.022530   64909 cri.go:89] found id: ""
	I1003 18:31:05.022595   64909 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:31:05.030329   64909 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:31:05.037782   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:31:05.037841   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:31:05.045127   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:31:05.045142   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:31:05.045174   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:31:05.052235   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:31:05.052286   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:31:05.059062   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:31:05.066034   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:31:05.066081   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:31:05.072912   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.079906   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:31:05.079966   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.086575   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:31:05.093500   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:31:05.093559   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:31:05.100246   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:31:05.136174   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:31:05.136254   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:31:05.156320   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:31:05.156407   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:31:05.156462   64909 kubeadm.go:318] OS: Linux
	I1003 18:31:05.156539   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:31:05.156610   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:31:05.156705   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:31:05.156790   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:31:05.156865   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:31:05.156939   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:31:05.157035   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:31:05.157127   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:31:05.210250   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:31:05.210408   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:31:05.210566   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:31:05.217643   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:31:05.219725   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:31:05.219828   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:31:05.219943   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:31:05.398135   64909 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 18:31:05.511875   64909 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 18:31:05.863575   64909 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 18:31:06.044823   64909 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 18:31:06.083505   64909 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 18:31:06.083616   64909 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.181464   64909 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 18:31:06.181591   64909 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.345813   64909 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 18:31:06.565989   64909 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 18:31:06.759809   64909 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 18:31:06.759892   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:31:06.883072   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:31:07.211268   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:31:07.403076   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:31:07.687412   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:31:08.052476   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:31:08.052957   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:31:08.054984   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:31:08.056889   64909 out.go:252]   - Booting up control plane ...
	I1003 18:31:08.056984   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:31:08.057047   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:31:08.057102   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:31:08.069846   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:31:08.069954   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:31:08.077490   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:31:08.077826   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:31:08.077870   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:31:08.170750   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:31:08.170893   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:31:09.172507   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001794723s
	I1003 18:31:09.175233   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:31:09.175335   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:31:09.175418   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:31:09.175496   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:35:09.177158   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	I1003 18:35:09.177466   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	I1003 18:35:09.177673   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	I1003 18:35:09.177731   64909 kubeadm.go:318] 
	I1003 18:35:09.177887   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:35:09.178114   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:35:09.178320   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:35:09.178580   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:35:09.178818   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:35:09.179017   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:35:09.179033   64909 kubeadm.go:318] 
	I1003 18:35:09.182028   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:09.182304   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:35:09.182918   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1003 18:35:09.183015   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
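	The crictl commands kubeadm suggests above can be run as-is on the node. With the Docker driver the node is itself a container named after the profile (ha-422561 in this run), so a minimal sketch, assuming the standard minikube layout, is:
	
		docker exec -it ha-422561 sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
		docker exec -it ha-422561 sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	
	CONTAINERID is the placeholder from kubeadm's own advice; as the later "container status" section shows, in this run the listing comes back empty.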
	W1003 18:35:09.183174   64909 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001794723s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
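	The three health endpoints kubeadm polls are plain HTTPS URLs, so the checks can be reproduced by hand from inside the node; a sketch using the exact URLs reported above (curl -k skips certificate verification; the profile name is taken from this log):
	
		minikube ssh -p ha-422561 -- curl -ks https://192.168.49.2:8443/livez
		minikube ssh -p ha-422561 -- curl -ks https://127.0.0.1:10257/healthz
		minikube ssh -p ha-422561 -- curl -ks https://127.0.0.1:10259/livez
	
	A "connection refused" here matches the dial tcp errors above and means the component process never bound its port, rather than failing its health check.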
	
	I1003 18:35:09.183243   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 18:35:11.953646   64909 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.770379999s)
	I1003 18:35:11.953721   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:35:11.965876   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:35:11.965928   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:35:11.973363   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:35:11.973382   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:35:11.973419   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:35:11.980752   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:35:11.980806   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:35:11.987857   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:35:11.995081   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:35:11.995127   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:35:12.001778   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.009063   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:35:12.009126   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.015927   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:35:12.022875   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:35:12.022943   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:35:12.029549   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:35:12.082477   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:12.138594   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:39:14.312592   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1003 18:39:14.312818   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 18:39:14.315914   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:39:14.315992   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:39:14.316115   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:39:14.316166   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:39:14.316250   64909 kubeadm.go:318] OS: Linux
	I1003 18:39:14.316328   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:39:14.316401   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:39:14.316475   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:39:14.316553   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:39:14.316624   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:39:14.316701   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:39:14.316751   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:39:14.316825   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:39:14.316936   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:39:14.317123   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:39:14.317262   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:39:14.317314   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:39:14.319872   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:39:14.319940   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:39:14.320033   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:39:14.320122   64909 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 18:39:14.320186   64909 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 18:39:14.320253   64909 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 18:39:14.320299   64909 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 18:39:14.320350   64909 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 18:39:14.320420   64909 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 18:39:14.320509   64909 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 18:39:14.320604   64909 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 18:39:14.320671   64909 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 18:39:14.320751   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:39:14.320828   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:39:14.320904   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:39:14.321006   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:39:14.321096   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:39:14.321174   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:39:14.321279   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:39:14.321373   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:39:14.322793   64909 out.go:252]   - Booting up control plane ...
	I1003 18:39:14.322884   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:39:14.323004   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:39:14.323072   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:39:14.323162   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:39:14.323237   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:39:14.323335   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:39:14.323415   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:39:14.323456   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:39:14.323557   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:39:14.323652   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:39:14.323702   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001540709s
	I1003 18:39:14.323792   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:39:14.323860   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:39:14.323946   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:39:14.324043   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:39:14.324124   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	I1003 18:39:14.324186   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	I1003 18:39:14.324248   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	I1003 18:39:14.324258   64909 kubeadm.go:318] 
	I1003 18:39:14.324352   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:39:14.324439   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:39:14.324519   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:39:14.324595   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:39:14.324687   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:39:14.324773   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:39:14.324799   64909 kubeadm.go:318] 
	I1003 18:39:14.324836   64909 kubeadm.go:402] duration metric: took 8m9.327461574s to StartCluster
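	With both init attempts timed out, minikube now sweeps the CRI for every control-plane component (the crictl runs that follow). The same sweep can be done by hand inside the node, e.g. (a sketch built from the commands visible below):
	
		for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
		  sudo crictl ps -a --quiet --name="$c"
		done
	
	Every empty "found id" result below means not even an exited container exists for that component, which points at container creation failing outright rather than the components crash-looping.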
	I1003 18:39:14.324877   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:39:14.324935   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:39:14.352551   64909 cri.go:89] found id: ""
	I1003 18:39:14.352594   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.352608   64909 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:39:14.352617   64909 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:39:14.352684   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:39:14.376604   64909 cri.go:89] found id: ""
	I1003 18:39:14.376629   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.376638   64909 logs.go:284] No container was found matching "etcd"
	I1003 18:39:14.376643   64909 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:39:14.376750   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:39:14.401480   64909 cri.go:89] found id: ""
	I1003 18:39:14.401504   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.401512   64909 logs.go:284] No container was found matching "coredns"
	I1003 18:39:14.401517   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:39:14.401582   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:39:14.426822   64909 cri.go:89] found id: ""
	I1003 18:39:14.426858   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.426871   64909 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:39:14.426879   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:39:14.426946   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:39:14.451679   64909 cri.go:89] found id: ""
	I1003 18:39:14.451710   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.451722   64909 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:39:14.451730   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:39:14.451787   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:39:14.477253   64909 cri.go:89] found id: ""
	I1003 18:39:14.477275   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.477282   64909 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:39:14.477288   64909 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:39:14.477332   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:39:14.501586   64909 cri.go:89] found id: ""
	I1003 18:39:14.501613   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.501621   64909 logs.go:284] No container was found matching "kindnet"
	I1003 18:39:14.501632   64909 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:39:14.501643   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:39:14.561285   64909 logs.go:123] Gathering logs for container status ...
	I1003 18:39:14.561318   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:39:14.589589   64909 logs.go:123] Gathering logs for kubelet ...
	I1003 18:39:14.589614   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:39:14.656775   64909 logs.go:123] Gathering logs for dmesg ...
	I1003 18:39:14.656809   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:39:14.668000   64909 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:39:14.668023   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:39:14.725446   64909 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
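	This describe-nodes failure is expected at this point: kubectl is pointed at localhost:8443, the same apiserver that never passed its livez check, so "connection refused" only restates the earlier failure. A quick confirmation (a sketch; ss is assumed present in the node image) is to check whether anything is listening on 8443:
	
		minikube ssh -p ha-422561 "sudo ss -tlnp | grep 8443"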
	W1003 18:39:14.725478   64909 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	W1003 18:39:14.725530   64909 out.go:285] * 
	W1003 18:39:14.725612   64909 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:39:14.725629   64909 out.go:285] * 
	W1003 18:39:14.727399   64909 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
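	The log-collection step from the box above can be run against this specific profile; a sketch, with the -p flag assumed to match the profile used in this run:
	
		minikube logs -p ha-422561 --file=logs.txt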
	I1003 18:39:14.731087   64909 out.go:203] 
	W1003 18:39:14.732560   64909 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:39:14.732585   64909 out.go:285] * 
	I1003 18:39:14.734183   64909 out.go:203] 
	
	
	==> CRI-O <==
	Oct 03 18:40:42 ha-422561 crio[781]: time="2025-10-03T18:40:42.924226182Z" level=info msg="createCtr: removing container 88b5d6e917642dd6a17610e58e1d82ba0173c1bba59697739259e049f795496f" id=f813270a-3b77-4aee-ba4f-286ca5f3c68c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:42 ha-422561 crio[781]: time="2025-10-03T18:40:42.924256755Z" level=info msg="createCtr: deleting container 88b5d6e917642dd6a17610e58e1d82ba0173c1bba59697739259e049f795496f from storage" id=f813270a-3b77-4aee-ba4f-286ca5f3c68c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:42 ha-422561 crio[781]: time="2025-10-03T18:40:42.926130977Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-422561_kube-system_e643a03771f1e72f527532eff2c66a9c_0" id=f813270a-3b77-4aee-ba4f-286ca5f3c68c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.896589105Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=6a4d82f0-368a-4e9c-8a68-613aeca5ca6d name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.897504377Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=34b140e2-aed5-4daf-9de3-b9e65f8ce6db name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.898312455Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-422561/kube-scheduler" id=c0ff24b2-5327-4713-afb5-961b59b98a21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.898543485Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.90171654Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.902172797Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.918904384Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=c0ff24b2-5327-4713-afb5-961b59b98a21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.920218692Z" level=info msg="createCtr: deleting container ID b0526328ea3bcd02f6dad5a98d49dcec6de935893fc87cf4b7225f9aeb00c5f3 from idIndex" id=c0ff24b2-5327-4713-afb5-961b59b98a21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.920247849Z" level=info msg="createCtr: removing container b0526328ea3bcd02f6dad5a98d49dcec6de935893fc87cf4b7225f9aeb00c5f3" id=c0ff24b2-5327-4713-afb5-961b59b98a21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.920276478Z" level=info msg="createCtr: deleting container b0526328ea3bcd02f6dad5a98d49dcec6de935893fc87cf4b7225f9aeb00c5f3 from storage" id=c0ff24b2-5327-4713-afb5-961b59b98a21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.922174432Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-422561_kube-system_2640157afe5e174d7402164688eed7be_0" id=c0ff24b2-5327-4713-afb5-961b59b98a21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.895784501Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=7701b0da-d602-45bd-b4be-827842374e9c name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.896698231Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=459c77c9-c9a1-4442-8efd-dba46c09a87b name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.897487621Z" level=info msg="Creating container: kube-system/etcd-ha-422561/etcd" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.897719618Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.902014628Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.902421286Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.918695418Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.920064591Z" level=info msg="createCtr: deleting container ID 60eac4f05bb70cc097a023480fc9d2f45ed0628f63763a71867879f1fd5fa153 from idIndex" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.920098966Z" level=info msg="createCtr: removing container 60eac4f05bb70cc097a023480fc9d2f45ed0628f63763a71867879f1fd5fa153" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.920129084Z" level=info msg="createCtr: deleting container 60eac4f05bb70cc097a023480fc9d2f45ed0628f63763a71867879f1fd5fa153 from storage" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.922274937Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-422561_kube-system_6803106e6cb30e1b9b282ce29772fddf_0" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:40:52.363415    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:52.363927    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:52.365529    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:52.365958    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:52.367458    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:40:52 up  1:23,  0 user,  load average: 0.12, 0.08, 0.07
	Linux ha-422561 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:40:42 ha-422561 kubelet[1961]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-422561_kube-system(e643a03771f1e72f527532eff2c66a9c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:42 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:40:42 ha-422561 kubelet[1961]: E1003 18:40:42.926532    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-422561" podUID="e643a03771f1e72f527532eff2c66a9c"
	Oct 03 18:40:43 ha-422561 kubelet[1961]: E1003 18:40:43.348285    1961 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-422561.186b0ef272ca351c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-422561,UID:ha-422561,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-422561 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-422561,},FirstTimestamp:2025-10-03 18:35:13.889039644 +0000 UTC m=+0.583846472,LastTimestamp:2025-10-03 18:35:13.889039644 +0000 UTC m=+0.583846472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-422561,}"
	Oct 03 18:40:43 ha-422561 kubelet[1961]: E1003 18:40:43.916095    1961 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-422561\" not found"
	Oct 03 18:40:45 ha-422561 kubelet[1961]: E1003 18:40:45.896130    1961 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:40:45 ha-422561 kubelet[1961]: E1003 18:40:45.922434    1961 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:40:45 ha-422561 kubelet[1961]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:45 ha-422561 kubelet[1961]:  > podSandboxID="a10975bd62b256134c3b4cd528b6d141353311ccb4309c6a5b3dea224dc6ecb8"
	Oct 03 18:40:45 ha-422561 kubelet[1961]: E1003 18:40:45.922540    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:40:45 ha-422561 kubelet[1961]:         container kube-scheduler start failed in pod kube-scheduler-ha-422561_kube-system(2640157afe5e174d7402164688eed7be): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:45 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:40:45 ha-422561 kubelet[1961]: E1003 18:40:45.922583    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-422561" podUID="2640157afe5e174d7402164688eed7be"
	Oct 03 18:40:46 ha-422561 kubelet[1961]: E1003 18:40:46.895363    1961 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:40:46 ha-422561 kubelet[1961]: E1003 18:40:46.922547    1961 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:40:46 ha-422561 kubelet[1961]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:46 ha-422561 kubelet[1961]:  > podSandboxID="d8c61f11856eaf647667c61ede204d0da4f897662d4f66aa1405fe26a28a98f5"
	Oct 03 18:40:46 ha-422561 kubelet[1961]: E1003 18:40:46.922663    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:40:46 ha-422561 kubelet[1961]:         container etcd start failed in pod etcd-ha-422561_kube-system(6803106e6cb30e1b9b282ce29772fddf): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:46 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:40:46 ha-422561 kubelet[1961]: E1003 18:40:46.922710    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-422561" podUID="6803106e6cb30e1b9b282ce29772fddf"
	Oct 03 18:40:48 ha-422561 kubelet[1961]: E1003 18:40:48.535582    1961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-422561?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 03 18:40:48 ha-422561 kubelet[1961]: I1003 18:40:48.695745    1961 kubelet_node_status.go:75] "Attempting to register node" node="ha-422561"
	Oct 03 18:40:48 ha-422561 kubelet[1961]: E1003 18:40:48.696172    1961 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-422561"
	Oct 03 18:40:49 ha-422561 kubelet[1961]: E1003 18:40:49.418760    1961 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	

-- /stdout --
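Every CreateContainer attempt in the crio log above fails with "cannot open sd-bus: No such file or directory", i.e. the runtime could not open a connection to systemd over D-Bus inside the kic node container, so kube-scheduler, etcd and kube-controller-manager never start and the apiserver stays down. A minimal diagnostic sketch, assuming the node container is still running under the name ha-422561 shown in this report; the socket paths checked are assumptions, not something this log confirms:

	# sketch: look for the sockets a systemd cgroup manager needs for sd-bus
	docker exec ha-422561 ls -l /run/dbus/system_bus_socket /run/systemd/private
	# sketch: confirm dbus is actually up inside the node's systemd
	docker exec ha-422561 systemctl is-active dbus.socket dbus.service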
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561: exit status 6 (296.726534ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1003 18:40:52.730060   72162 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-422561" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (1.35s)
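The status probe above templates a single field ({{.APIServer}}); the helpers explicitly treat its exit status 6 as possibly benign and only skip the kubectl checks. A sketch that dumps each status field the helpers key on, one per line, using this run's profile and node name; Host and APIServer appear as templates in this report, Kubelet and Kubeconfig are assumed field names:

	# sketch: print each minikube status field on its own line
	for f in Host Kubelet APIServer Kubeconfig; do
	  out/minikube-linux-amd64 status --format="{{.$f}}" -p ha-422561 -n ha-422561
	done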

x
+
TestMultiControlPlane/serial/AddWorkerNode (1.51s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 node add --alsologtostderr -v 5: exit status 103 (251.462885ms)

-- stdout --
	* The control-plane node ha-422561 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-422561"

-- /stdout --
** stderr ** 
	I1003 18:40:52.797199   72290 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:40:52.797474   72290 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:40:52.797483   72290 out.go:374] Setting ErrFile to fd 2...
	I1003 18:40:52.797487   72290 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:40:52.797648   72290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:40:52.797886   72290 mustload.go:65] Loading cluster: ha-422561
	I1003 18:40:52.798230   72290 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:40:52.798593   72290 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:40:52.815874   72290 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:40:52.816160   72290 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:40:52.867469   72290 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:40:52.857825791 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:40:52.867598   72290 api_server.go:166] Checking apiserver status ...
	I1003 18:40:52.867653   72290 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:40:52.867708   72290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:40:52.885242   72290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	W1003 18:40:52.988035   72290 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:40:52.990329   72290 out.go:179] * The control-plane node ha-422561 apiserver is not running: (state=Stopped)
	I1003 18:40:52.991534   72290 out.go:179]   To start a cluster, run: "minikube start -p ha-422561"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-422561 node add --alsologtostderr -v 5" : exit status 103
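Exit status 103 here corresponds to the "apiserver is not running" branch visible in the stderr trace: api_server.go:166 runs sudo pgrep -xnf kube-apiserver.*minikube.* inside the node over SSH, and the empty result (exit 1, logged at api_server.go:170) is what flips the state to Stopped. A sketch reproducing that probe by hand, with the command and pattern copied from the trace above:

	# sketch: the same process probe minikube ran at api_server.go:166
	out/minikube-linux-amd64 -p ha-422561 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	echo "pgrep exit: $?"   # 1 = no kube-apiserver process, hence state=Stopped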
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-422561
helpers_test.go:243: (dbg) docker inspect ha-422561:

-- stdout --
	[
	    {
	        "Id": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	        "Created": "2025-10-03T18:31:00.396132938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 65481,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T18:31:00.428325646Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hostname",
	        "HostsPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hosts",
	        "LogPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512-json.log",
	        "Name": "/ha-422561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-422561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-422561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	                "LowerDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-422561",
	                "Source": "/var/lib/docker/volumes/ha-422561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422561",
	                "name.minikube.sigs.k8s.io": "ha-422561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3084976d568ce061948ebe671f279a80502b1d28417f2be7c2497961eac2a5aa",
	            "SandboxKey": "/var/run/docker/netns/3084976d568c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:e4:3c:eb:d3:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de6aa7ca29f453c0d15cb280abde7ee215f554c89e78e3db8a0f7590468114b5",
	                    "EndpointID": "1b961733d045b77a64efb8afa6caa273125f56ec888f823b790f5454f23ca3b7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422561",
	                        "eef8fc426b2b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
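The Ports block in this inspect output is where the SSH endpoint used throughout the run (127.0.0.1:32783) comes from; cli_runner.go resolves it with an inspect template before opening the SSH client. The same lookup, runnable by hand, with the template copied from the trace above:

	# sketch: resolve the host port docker mapped onto the node's 22/tcp
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-422561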
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561: exit status 6 (292.333728ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1003 18:40:53.293116   72398 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

** /stderr **
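The status.go:458 error means the ha-422561 entry was never written to this run's kubeconfig, consistent with an apiserver that never came up, so the kubeconfig check degrades the status even though the host container reports Running. Once the cluster is actually healthy, the WARNING's own suggestion applies; a sketch using the kubeconfig path from the error above:

	# sketch: rewrite the profile's context, then verify it exists
	KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig \
	  out/minikube-linux-amd64 update-context -p ha-422561
	kubectl config get-contexts ha-422561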
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-889240 ssh pgrep buildkitd                                                                           │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ image   │ functional-889240 image ls --format json --alsologtostderr                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image ls --format table --alsologtostderr                                                     │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image build -t localhost/my-image:functional-889240 testdata/build --alsologtostderr          │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:27 UTC │
	│ image   │ functional-889240 image ls                                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:27 UTC │ 03 Oct 25 18:27 UTC │
	│ delete  │ -p functional-889240                                                                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │ 03 Oct 25 18:30 UTC │
	│ start   │ ha-422561 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- rollout status deployment/busybox                                                          │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node add --alsologtostderr -v 5                                                                       │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:30:55
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:30:55.351405   64909 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:30:55.351662   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351671   64909 out.go:374] Setting ErrFile to fd 2...
	I1003 18:30:55.351675   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351854   64909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:30:55.352339   64909 out.go:368] Setting JSON to false
	I1003 18:30:55.353203   64909 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4406,"bootTime":1759511849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:30:55.353289   64909 start.go:140] virtualization: kvm guest
	I1003 18:30:55.355458   64909 out.go:179] * [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:30:55.356815   64909 notify.go:220] Checking for updates...
	I1003 18:30:55.356884   64909 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:30:55.358389   64909 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:30:55.359964   64909 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:30:55.361351   64909 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:30:55.362647   64909 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:30:55.363956   64909 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:30:55.365351   64909 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:30:55.387768   64909 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:30:55.387885   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.443407   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.433728571 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.443516   64909 docker.go:318] overlay module found
	I1003 18:30:55.445440   64909 out.go:179] * Using the docker driver based on user configuration
	I1003 18:30:55.446777   64909 start.go:304] selected driver: docker
	I1003 18:30:55.446793   64909 start.go:924] validating driver "docker" against <nil>
	I1003 18:30:55.446808   64909 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:30:55.447403   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.498777   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.489521827 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.498958   64909 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 18:30:55.499206   64909 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:30:55.501187   64909 out.go:179] * Using Docker driver with root privileges
	I1003 18:30:55.502312   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:30:55.502386   64909 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 18:30:55.502397   64909 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 18:30:55.502459   64909 start.go:348] cluster config:
	{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:30:55.503779   64909 out.go:179] * Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	I1003 18:30:55.504816   64909 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:30:55.506028   64909 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:30:55.507131   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:55.507167   64909 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:30:55.507169   64909 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:30:55.507175   64909 cache.go:58] Caching tarball of preloaded images
	I1003 18:30:55.507294   64909 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:30:55.507311   64909 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:30:55.507736   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:30:55.507764   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json: {Name:mk1ece959bac74a473416f0dfc8af04a6136d7b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:30:55.527458   64909 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:30:55.527478   64909 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:30:55.527494   64909 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:30:55.527527   64909 start.go:360] acquireMachinesLock for ha-422561: {Name:mk32fd04a5d9b5f89831583bab7d7527f4d187a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:30:55.527631   64909 start.go:364] duration metric: took 81.336µs to acquireMachinesLock for "ha-422561"
	I1003 18:30:55.527657   64909 start.go:93] Provisioning new machine with config: &{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:30:55.527748   64909 start.go:125] createHost starting for "" (driver="docker")
	I1003 18:30:55.529663   64909 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1003 18:30:55.529898   64909 start.go:159] libmachine.API.Create for "ha-422561" (driver="docker")
	I1003 18:30:55.529933   64909 client.go:168] LocalClient.Create starting
	I1003 18:30:55.530028   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem
	I1003 18:30:55.530072   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530097   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530187   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem
	I1003 18:30:55.530226   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530238   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530612   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 18:30:55.547068   64909 cli_runner.go:211] docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 18:30:55.547129   64909 network_create.go:284] running [docker network inspect ha-422561] to gather additional debugging logs...
	I1003 18:30:55.547146   64909 cli_runner.go:164] Run: docker network inspect ha-422561
	W1003 18:30:55.563141   64909 cli_runner.go:211] docker network inspect ha-422561 returned with exit code 1
	I1003 18:30:55.563167   64909 network_create.go:287] error running [docker network inspect ha-422561]: docker network inspect ha-422561: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-422561 not found
	I1003 18:30:55.563179   64909 network_create.go:289] output of [docker network inspect ha-422561]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-422561 not found
	
	** /stderr **
	I1003 18:30:55.563276   64909 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:30:55.579301   64909 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00157b3a0}
	I1003 18:30:55.579336   64909 network_create.go:124] attempt to create docker network ha-422561 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1003 18:30:55.579388   64909 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-422561 ha-422561
	I1003 18:30:55.634233   64909 network_create.go:108] docker network ha-422561 192.168.49.0/24 created
	I1003 18:30:55.634260   64909 kic.go:121] calculated static IP "192.168.49.2" for the "ha-422561" container
	I1003 18:30:55.634318   64909 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 18:30:55.649960   64909 cli_runner.go:164] Run: docker volume create ha-422561 --label name.minikube.sigs.k8s.io=ha-422561 --label created_by.minikube.sigs.k8s.io=true
	I1003 18:30:55.667186   64909 oci.go:103] Successfully created a docker volume ha-422561
	I1003 18:30:55.667250   64909 cli_runner.go:164] Run: docker run --rm --name ha-422561-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --entrypoint /usr/bin/test -v ha-422561:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 18:30:56.041615   64909 oci.go:107] Successfully prepared a docker volume ha-422561
	I1003 18:30:56.041648   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:56.041669   64909 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 18:30:56.041727   64909 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 18:31:00.326417   64909 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.284654466s)
	I1003 18:31:00.326457   64909 kic.go:203] duration metric: took 4.284784967s to extract preloaded images to volume ...
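The preload probe at 18:30:56 resolved to the tarball extracted above. Its filename encodes the preload schema version, Kubernetes version, container runtime, storage driver, and architecture; a sketch of that assembly (constant names are illustrative, not minikube's identifiers):

package main

import "fmt"

func main() {
	// Assumed field names; the values reproduce the filename in the log.
	const (
		preloadVersion = "v18"
		k8sVersion     = "v1.34.1"
		runtime        = "cri-o"
		storageDriver  = "overlay"
		arch           = "amd64"
	)
	name := fmt.Sprintf("preloaded-images-k8s-%s-%s-%s-%s-%s.tar.lz4",
		preloadVersion, k8sVersion, runtime, storageDriver, arch)
	// preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	fmt.Println(name)
}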
	W1003 18:31:00.326567   64909 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1003 18:31:00.326610   64909 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1003 18:31:00.326657   64909 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 18:31:00.381592   64909 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-422561 --name ha-422561 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-422561 --network ha-422561 --ip 192.168.49.2 --volume ha-422561:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1003 18:31:00.641348   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Running}}
	I1003 18:31:00.659876   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:00.678319   64909 cli_runner.go:164] Run: docker exec ha-422561 stat /var/lib/dpkg/alternatives/iptables
	I1003 18:31:00.728414   64909 oci.go:144] the created container "ha-422561" has a running status.
	I1003 18:31:00.728450   64909 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa...
	I1003 18:31:01.103610   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1003 18:31:01.103663   64909 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 18:31:01.128670   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.147200   64909 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 18:31:01.147218   64909 kic_runner.go:114] Args: [docker exec --privileged ha-422561 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 18:31:01.189023   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.207395   64909 machine.go:93] provisionDockerMachine start ...
	I1003 18:31:01.207497   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.226029   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.226282   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.226299   64909 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:31:01.372245   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.372275   64909 ubuntu.go:182] provisioning hostname "ha-422561"
	I1003 18:31:01.372335   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.390674   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.390889   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.390902   64909 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-422561 && echo "ha-422561" | sudo tee /etc/hostname
	I1003 18:31:01.544850   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.544932   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.563695   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.563966   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.564014   64909 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422561/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:31:01.708942   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:31:01.708971   64909 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:31:01.709036   64909 ubuntu.go:190] setting up certificates
	I1003 18:31:01.709048   64909 provision.go:84] configureAuth start
	I1003 18:31:01.709101   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:01.727778   64909 provision.go:143] copyHostCerts
	I1003 18:31:01.727814   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727849   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:31:01.727858   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727940   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:31:01.728054   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728079   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:31:01.728090   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728137   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:31:01.728200   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728225   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:31:01.728234   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728266   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:31:01.728336   64909 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.ha-422561 san=[127.0.0.1 192.168.49.2 ha-422561 localhost minikube]
	I1003 18:31:01.864219   64909 provision.go:177] copyRemoteCerts
	I1003 18:31:01.864281   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:31:01.864317   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.882069   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:01.982800   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:31:01.982877   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 18:31:02.000887   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:31:02.000952   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 18:31:02.017591   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:31:02.017639   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:31:02.034172   64909 provision.go:87] duration metric: took 325.10989ms to configureAuth
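configureAuth produced a server certificate whose SANs match the san list logged above: 127.0.0.1, 192.168.49.2, ha-422561, localhost, minikube. A self-contained crypto/x509 sketch of issuing such a certificate; it self-signs for brevity (minikube signs with its ca.pem/ca-key.pem pair), and the lifetime mirrors the CertExpiration:26280h0m0s setting from the cluster config:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-422561"}},
		// SANs copied from the provision step above.
		DNSNames:    []string{"ha-422561", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the config
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here; minikube signs with its CA cert and key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}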
	I1003 18:31:02.034202   64909 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:31:02.034393   64909 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:31:02.034508   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.052111   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:02.052326   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:02.052344   64909 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:31:02.295594   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:31:02.295629   64909 machine.go:96] duration metric: took 1.088207423s to provisionDockerMachine
	I1003 18:31:02.295640   64909 client.go:171] duration metric: took 6.765697238s to LocalClient.Create
	I1003 18:31:02.295660   64909 start.go:167] duration metric: took 6.765761646s to libmachine.API.Create "ha-422561"
	I1003 18:31:02.295669   64909 start.go:293] postStartSetup for "ha-422561" (driver="docker")
	I1003 18:31:02.295682   64909 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:31:02.295752   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:31:02.295789   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.312783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.414720   64909 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:31:02.418127   64909 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:31:02.418149   64909 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:31:02.418159   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:31:02.418213   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:31:02.418310   64909 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:31:02.418326   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:31:02.418453   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:31:02.425623   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:02.444405   64909 start.go:296] duration metric: took 148.722871ms for postStartSetup
	I1003 18:31:02.444748   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.462226   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:31:02.462456   64909 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:31:02.462495   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.478737   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.575846   64909 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:31:02.580138   64909 start.go:128] duration metric: took 7.052376255s to createHost
	I1003 18:31:02.580160   64909 start.go:83] releasing machines lock for "ha-422561", held for 7.052515614s
	I1003 18:31:02.580230   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.596730   64909 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:31:02.596776   64909 ssh_runner.go:195] Run: cat /version.json
	I1003 18:31:02.596798   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.596817   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.613783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.614183   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.764865   64909 ssh_runner.go:195] Run: systemctl --version
	I1003 18:31:02.771251   64909 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:31:02.803643   64909 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:31:02.807949   64909 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:31:02.808044   64909 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:31:02.833024   64909 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 18:31:02.833043   64909 start.go:495] detecting cgroup driver to use...
	I1003 18:31:02.833073   64909 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:31:02.833108   64909 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:31:02.847613   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:31:02.858865   64909 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:31:02.858910   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:31:02.874470   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:31:02.890554   64909 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:31:02.970342   64909 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:31:03.055310   64909 docker.go:234] disabling docker service ...
	I1003 18:31:03.055369   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:31:03.072668   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:31:03.084308   64909 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:31:03.163959   64909 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:31:03.241930   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:31:03.253863   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:31:03.266905   64909 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:31:03.266971   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.276795   64909 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:31:03.276848   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.285157   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.293117   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.301070   64909 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:31:03.308489   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.316789   64909 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.329424   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.337651   64909 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:31:03.344839   64909 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:31:03.352026   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.430894   64909 ssh_runner.go:195] Run: sudo systemctl restart crio
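Each sed above is an idempotent one-line rewrite of the CRI-O drop-in, applied before the daemon-reload and restart. A local Go equivalent of just the pause_image edit, as a sketch rather than minikube's actual code (it assumes the drop-in exists at the path shown):

package main

import (
	"os"
	"regexp"
)

func main() {
	// Same target file as the sed command above.
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Rewrite whatever pause_image line exists, exactly like the sed.
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}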
	I1003 18:31:03.533915   64909 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:31:03.534002   64909 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:31:03.537783   64909 start.go:563] Will wait 60s for crictl version
	I1003 18:31:03.537838   64909 ssh_runner.go:195] Run: which crictl
	I1003 18:31:03.541393   64909 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:31:03.564883   64909 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:31:03.564963   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.591363   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.619425   64909 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:31:03.620466   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:31:03.637151   64909 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:31:03.641184   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
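That hosts update is a filter-then-append rewrite: drop any stale host.minikube.internal line, append the fresh mapping, write a temp file, and copy it over /etc/hosts. The same idea in Go against a scratch file (hosts.txt is a stand-in so the sketch needs no root):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.49.1\thost.minikube.internal"
	data, err := os.ReadFile("hosts.txt") // stand-in for /etc/hosts
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale mapping, mirroring the grep -v above.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("hosts.txt", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("added:", entry)
}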
	I1003 18:31:03.651292   64909 kubeadm.go:883] updating cluster {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:31:03.651379   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:31:03.651428   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.680883   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.680904   64909 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:31:03.680955   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.706829   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.706859   64909 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:31:03.706866   64909 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 18:31:03.706953   64909 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 18:31:03.707032   64909 ssh_runner.go:195] Run: crio config
	I1003 18:31:03.751501   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:31:03.751523   64909 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:31:03.751538   64909 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:31:03.751558   64909 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422561 NodeName:ha-422561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:31:03.751669   64909 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422561"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
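With allocate-node-cidrs=true in the controllerManager extraArgs above, each node that joins is carved a per-node pod CIDR out of podSubnet 10.244.0.0/16; a /24 per node is the usual IPv4 default mask. A sketch of the first few allocations:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// podSubnet from the ClusterConfiguration above; the /24 per-node
	// mask is the controller-manager's usual IPv4 default.
	base := netip.MustParsePrefix("10.244.0.0/16")
	addr := base.Addr()
	for i := 0; i < 3; i++ {
		fmt.Printf("node %d pod CIDR: %s\n", i, netip.PrefixFrom(addr, 24))
		b := addr.As4()
		b[2]++ // step to the next /24
		addr = netip.AddrFrom4(b)
	}
	// node 0 pod CIDR: 10.244.0.0/24
	// node 1 pod CIDR: 10.244.1.0/24
	// node 2 pod CIDR: 10.244.2.0/24
}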
	
	I1003 18:31:03.751691   64909 kube-vip.go:115] generating kube-vip config ...
	I1003 18:31:03.751728   64909 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1003 18:31:03.763009   64909 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:31:03.763125   64909 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
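Since the ip_vs modules were unavailable, this kube-vip manifest provides only the ARP-advertised VIP (vip_arp=true) guarded by leader election (5s lease, 3s renew deadline, 1s retry), with no IPVS load-balancing. Once some control-plane node holds 192.168.49.254, a plain TCP connect is enough to confirm it; a minimal probe sketch:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// VIP and port from the kube-vip manifest above. A TCP connect only
	// shows that some node currently answers for the VIP; it checks
	// neither TLS nor API health.
	conn, err := net.DialTimeout("tcp", "192.168.49.254:8443", 2*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP reachable at 192.168.49.254:8443")
}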
	I1003 18:31:03.763181   64909 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:31:03.770585   64909 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:31:03.770633   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 18:31:03.778069   64909 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1003 18:31:03.790397   64909 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:31:03.805112   64909 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1003 18:31:03.817362   64909 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1003 18:31:03.830824   64909 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1003 18:31:03.834300   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:31:03.843861   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.921407   64909 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:31:03.944431   64909 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561 for IP: 192.168.49.2
	I1003 18:31:03.944451   64909 certs.go:195] generating shared ca certs ...
	I1003 18:31:03.944468   64909 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:03.944607   64909 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:31:03.944644   64909 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:31:03.944652   64909 certs.go:257] generating profile certs ...
	I1003 18:31:03.944708   64909 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key
	I1003 18:31:03.944722   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt with IP's: []
	I1003 18:31:04.171087   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt ...
	I1003 18:31:04.171118   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt: {Name:mked6cb0f731cbb630d2b187c4975015a458a284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171291   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key ...
	I1003 18:31:04.171301   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key: {Name:mk0c9f0a0941d99f2af213cd316467f053532c99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171391   64909 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905
	I1003 18:31:04.171406   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1003 18:31:04.383185   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 ...
	I1003 18:31:04.383218   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905: {Name:mkc24c55d4abb428b3559a93e6e301be2cab703a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383381   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 ...
	I1003 18:31:04.383394   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905: {Name:mk0576a73623089a3eecf4e34bbbd214545e2247 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383486   64909 certs.go:382] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt
	I1003 18:31:04.383601   64909 certs.go:386] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key
	I1003 18:31:04.383674   64909 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key
	I1003 18:31:04.383689   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt with IP's: []
	I1003 18:31:04.628083   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt ...
	I1003 18:31:04.628112   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt: {Name:mkc19179c67a2559968759165df93d304eb42db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.628269   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key ...
	I1003 18:31:04.628279   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key: {Name:mka8b2392a3d721a70329b852837f3403643f948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.628347   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:31:04.628364   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:31:04.628375   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:31:04.628384   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:31:04.628397   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:31:04.628410   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:31:04.628430   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:31:04.628442   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:31:04.628492   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:31:04.628525   64909 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:31:04.628535   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:31:04.628558   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:31:04.628580   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:31:04.628601   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:31:04.628637   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:04.628666   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.628680   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.628692   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.629254   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:31:04.646879   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:31:04.663465   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:31:04.679837   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:31:04.695959   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 18:31:04.712689   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:31:04.729310   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:31:04.745587   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:31:04.761663   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:31:04.779546   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:31:04.796119   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:31:04.813748   64909 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:31:04.826629   64909 ssh_runner.go:195] Run: openssl version
	I1003 18:31:04.832848   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:31:04.840960   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844465   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844506   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.878276   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:31:04.886714   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:31:04.894672   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898099   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898154   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.931606   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:31:04.940357   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:31:04.948454   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952097   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952148   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.985741   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
	I1003 18:31:04.994005   64909 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:31:04.997322   64909 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 18:31:04.997379   64909 kubeadm.go:400] StartCluster: {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:31:04.997476   64909 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:31:04.997539   64909 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:31:05.022530   64909 cri.go:89] found id: ""
	I1003 18:31:05.022595   64909 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:31:05.030329   64909 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:31:05.037782   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:31:05.037841   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:31:05.045127   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:31:05.045142   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:31:05.045174   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:31:05.052235   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:31:05.052286   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:31:05.059062   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:31:05.066034   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:31:05.066081   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:31:05.072912   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.079906   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:31:05.079966   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.086575   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:31:05.093500   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:31:05.093559   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:31:05.100246   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:31:05.136174   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:31:05.136254   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:31:05.156320   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:31:05.156407   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:31:05.156462   64909 kubeadm.go:318] OS: Linux
	I1003 18:31:05.156539   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:31:05.156610   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:31:05.156705   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:31:05.156790   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:31:05.156865   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:31:05.156939   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:31:05.157035   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:31:05.157127   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:31:05.210250   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:31:05.210408   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:31:05.210566   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:31:05.217643   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:31:05.219725   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:31:05.219828   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:31:05.219943   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:31:05.398135   64909 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 18:31:05.511875   64909 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 18:31:05.863575   64909 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 18:31:06.044823   64909 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 18:31:06.083505   64909 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 18:31:06.083616   64909 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.181464   64909 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 18:31:06.181591   64909 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.345813   64909 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 18:31:06.565989   64909 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 18:31:06.759809   64909 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 18:31:06.759892   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:31:06.883072   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:31:07.211268   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:31:07.403076   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:31:07.687412   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:31:08.052476   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:31:08.052957   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:31:08.054984   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:31:08.056889   64909 out.go:252]   - Booting up control plane ...
	I1003 18:31:08.056984   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:31:08.057047   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:31:08.057102   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:31:08.069846   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:31:08.069954   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:31:08.077490   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:31:08.077826   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:31:08.077870   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:31:08.170750   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:31:08.170893   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:31:09.172507   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001794723s
	I1003 18:31:09.175233   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:31:09.175335   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:31:09.175418   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:31:09.175496   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:35:09.177158   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	I1003 18:35:09.177466   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	I1003 18:35:09.177673   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	I1003 18:35:09.177731   64909 kubeadm.go:318] 
	I1003 18:35:09.177887   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:35:09.178114   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:35:09.178320   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:35:09.178580   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:35:09.178818   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:35:09.179017   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:35:09.179033   64909 kubeadm.go:318] 
	I1003 18:35:09.182028   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:09.182304   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:35:09.182918   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1003 18:35:09.183015   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1003 18:35:09.183174   64909 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001794723s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
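The three control-plane-check endpoints above are plain HTTPS health probes, so the failure can be reproduced by hand. A minimal sketch, assuming shell access to the node (for example via `minikube ssh -p ha-422561`); `-k` skips TLS verification because the serving certificates are cluster-internal:

    # Probe the same endpoints kubeadm's control-plane-check polls
    curl -sk --max-time 10 https://192.168.49.2:8443/livez    # kube-apiserver
    curl -sk --max-time 10 https://127.0.0.1:10257/healthz    # kube-controller-manager
    curl -sk --max-time 10 https://127.0.0.1:10259/livez      # kube-scheduler

Given the "connection refused" errors above, the expected result here is an immediate connect failure rather than an HTTP status.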
	
	I1003 18:35:09.183243   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 18:35:11.953646   64909 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.770379999s)
	I1003 18:35:11.953721   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:35:11.965876   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:35:11.965928   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:35:11.973363   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:35:11.973382   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:35:11.973419   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:35:11.980752   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:35:11.980806   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:35:11.987857   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:35:11.995081   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:35:11.995127   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:35:12.001778   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.009063   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:35:12.009126   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.015927   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:35:12.022875   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:35:12.022943   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
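	The grep/rm pairs above implement a single cleanup rule: a leftover kubeconfig survives the reset only if it already points at the expected control-plane endpoint; otherwise it is removed before the retry. A minimal sketch of that loop, with the file list and endpoint taken from the log above:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done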
	I1003 18:35:12.029549   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:35:12.082477   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:12.138594   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:39:14.312592   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1003 18:39:14.312818   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 18:39:14.315914   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:39:14.315992   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:39:14.316115   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:39:14.316166   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:39:14.316250   64909 kubeadm.go:318] OS: Linux
	I1003 18:39:14.316328   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:39:14.316401   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:39:14.316475   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:39:14.316553   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:39:14.316624   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:39:14.316701   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:39:14.316751   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:39:14.316825   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:39:14.316936   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:39:14.317123   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:39:14.317262   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:39:14.317314   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:39:14.319872   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:39:14.319940   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:39:14.320033   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:39:14.320122   64909 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 18:39:14.320186   64909 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 18:39:14.320253   64909 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 18:39:14.320299   64909 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 18:39:14.320350   64909 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 18:39:14.320420   64909 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 18:39:14.320509   64909 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 18:39:14.320604   64909 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 18:39:14.320671   64909 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 18:39:14.320751   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:39:14.320828   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:39:14.320904   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:39:14.321006   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:39:14.321096   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:39:14.321174   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:39:14.321279   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:39:14.321373   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:39:14.322793   64909 out.go:252]   - Booting up control plane ...
	I1003 18:39:14.322884   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:39:14.323004   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:39:14.323072   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:39:14.323162   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:39:14.323237   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:39:14.323335   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:39:14.323415   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:39:14.323456   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:39:14.323557   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:39:14.323652   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:39:14.323702   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001540709s
	I1003 18:39:14.323792   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:39:14.323860   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:39:14.323946   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:39:14.324043   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:39:14.324124   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	I1003 18:39:14.324186   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	I1003 18:39:14.324248   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	I1003 18:39:14.324258   64909 kubeadm.go:318] 
	I1003 18:39:14.324352   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:39:14.324439   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:39:14.324519   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:39:14.324595   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:39:14.324687   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:39:14.324773   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:39:14.324799   64909 kubeadm.go:318] 
	I1003 18:39:14.324836   64909 kubeadm.go:402] duration metric: took 8m9.327461574s to StartCluster
	I1003 18:39:14.324877   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:39:14.324935   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:39:14.352551   64909 cri.go:89] found id: ""
	I1003 18:39:14.352594   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.352608   64909 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:39:14.352617   64909 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:39:14.352684   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:39:14.376604   64909 cri.go:89] found id: ""
	I1003 18:39:14.376629   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.376638   64909 logs.go:284] No container was found matching "etcd"
	I1003 18:39:14.376643   64909 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:39:14.376750   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:39:14.401480   64909 cri.go:89] found id: ""
	I1003 18:39:14.401504   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.401512   64909 logs.go:284] No container was found matching "coredns"
	I1003 18:39:14.401517   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:39:14.401582   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:39:14.426822   64909 cri.go:89] found id: ""
	I1003 18:39:14.426858   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.426871   64909 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:39:14.426879   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:39:14.426946   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:39:14.451679   64909 cri.go:89] found id: ""
	I1003 18:39:14.451710   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.451722   64909 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:39:14.451730   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:39:14.451787   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:39:14.477253   64909 cri.go:89] found id: ""
	I1003 18:39:14.477275   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.477282   64909 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:39:14.477288   64909 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:39:14.477332   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:39:14.501586   64909 cri.go:89] found id: ""
	I1003 18:39:14.501613   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.501621   64909 logs.go:284] No container was found matching "kindnet"
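	Each "0 containers" result above comes from `crictl ps -a --quiet --name=<component>`, which prints matching container IDs one per line and nothing when there is no match. A minimal sketch that runs the same enumeration in one pass, with the component list taken from the log:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none>}"
    done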
	I1003 18:39:14.501632   64909 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:39:14.501643   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:39:14.561285   64909 logs.go:123] Gathering logs for container status ...
	I1003 18:39:14.561318   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:39:14.589589   64909 logs.go:123] Gathering logs for kubelet ...
	I1003 18:39:14.589614   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:39:14.656775   64909 logs.go:123] Gathering logs for dmesg ...
	I1003 18:39:14.656809   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:39:14.668000   64909 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:39:14.668023   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:39:14.725446   64909 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
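	Every stderr line above reduces to the same fact: nothing is listening on port 8443 because no kube-apiserver container was ever created. A quicker probe than `describe nodes` for that condition, using the same kubectl binary and kubeconfig as the log (`--request-timeout` just bounds the wait):

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl get nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig --request-timeout=10s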
	W1003 18:39:14.725478   64909 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	W1003 18:39:14.725530   64909 out.go:285] * 
	W1003 18:39:14.727399   64909 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:39:14.731087   64909 out.go:203] 
	W1003 18:39:14.732560   64909 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1003 18:39:14.732585   64909 out.go:285] * 
	I1003 18:39:14.734183   64909 out.go:203] 
	
	
	==> CRI-O <==
	Oct 03 18:40:42 ha-422561 crio[781]: time="2025-10-03T18:40:42.924226182Z" level=info msg="createCtr: removing container 88b5d6e917642dd6a17610e58e1d82ba0173c1bba59697739259e049f795496f" id=f813270a-3b77-4aee-ba4f-286ca5f3c68c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:42 ha-422561 crio[781]: time="2025-10-03T18:40:42.924256755Z" level=info msg="createCtr: deleting container 88b5d6e917642dd6a17610e58e1d82ba0173c1bba59697739259e049f795496f from storage" id=f813270a-3b77-4aee-ba4f-286ca5f3c68c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:42 ha-422561 crio[781]: time="2025-10-03T18:40:42.926130977Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-422561_kube-system_e643a03771f1e72f527532eff2c66a9c_0" id=f813270a-3b77-4aee-ba4f-286ca5f3c68c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.896589105Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=6a4d82f0-368a-4e9c-8a68-613aeca5ca6d name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.897504377Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=34b140e2-aed5-4daf-9de3-b9e65f8ce6db name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.898312455Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-422561/kube-scheduler" id=c0ff24b2-5327-4713-afb5-961b59b98a21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.898543485Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.90171654Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.902172797Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.918904384Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=c0ff24b2-5327-4713-afb5-961b59b98a21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.920218692Z" level=info msg="createCtr: deleting container ID b0526328ea3bcd02f6dad5a98d49dcec6de935893fc87cf4b7225f9aeb00c5f3 from idIndex" id=c0ff24b2-5327-4713-afb5-961b59b98a21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.920247849Z" level=info msg="createCtr: removing container b0526328ea3bcd02f6dad5a98d49dcec6de935893fc87cf4b7225f9aeb00c5f3" id=c0ff24b2-5327-4713-afb5-961b59b98a21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.920276478Z" level=info msg="createCtr: deleting container b0526328ea3bcd02f6dad5a98d49dcec6de935893fc87cf4b7225f9aeb00c5f3 from storage" id=c0ff24b2-5327-4713-afb5-961b59b98a21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:45 ha-422561 crio[781]: time="2025-10-03T18:40:45.922174432Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-422561_kube-system_2640157afe5e174d7402164688eed7be_0" id=c0ff24b2-5327-4713-afb5-961b59b98a21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.895784501Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=7701b0da-d602-45bd-b4be-827842374e9c name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.896698231Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=459c77c9-c9a1-4442-8efd-dba46c09a87b name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.897487621Z" level=info msg="Creating container: kube-system/etcd-ha-422561/etcd" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.897719618Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.902014628Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.902421286Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.918695418Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.920064591Z" level=info msg="createCtr: deleting container ID 60eac4f05bb70cc097a023480fc9d2f45ed0628f63763a71867879f1fd5fa153 from idIndex" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.920098966Z" level=info msg="createCtr: removing container 60eac4f05bb70cc097a023480fc9d2f45ed0628f63763a71867879f1fd5fa153" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.920129084Z" level=info msg="createCtr: deleting container 60eac4f05bb70cc097a023480fc9d2f45ed0628f63763a71867879f1fd5fa153 from storage" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.922274937Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-422561_kube-system_6803106e6cb30e1b9b282ce29772fddf_0" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
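	
	Every CreateContainer attempt in this section fails the same way, with "cannot open sd-bus: No such file or directory" while the runtime sets up the container. One plausible reading, not confirmed by this log, is a cgroup-driver mismatch: CRI-O configured for the systemd cgroup manager in an environment where the systemd bus is unreachable. Two hedged checks from inside the node:

    sudo crio config 2>/dev/null | grep -i cgroup_manager   # effective CRI-O cgroup driver
    ps -p 1 -o comm=                                        # is systemd PID 1 in this environment?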
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:40:53.864943    3378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:53.865496    3378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:53.867059    3378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:53.867519    3378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:53.868922    3378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:40:53 up  1:23,  0 user,  load average: 0.12, 0.08, 0.07
	Linux ha-422561 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:40:43 ha-422561 kubelet[1961]: E1003 18:40:43.916095    1961 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-422561\" not found"
	Oct 03 18:40:45 ha-422561 kubelet[1961]: E1003 18:40:45.896130    1961 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:40:45 ha-422561 kubelet[1961]: E1003 18:40:45.922434    1961 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:40:45 ha-422561 kubelet[1961]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:45 ha-422561 kubelet[1961]:  > podSandboxID="a10975bd62b256134c3b4cd528b6d141353311ccb4309c6a5b3dea224dc6ecb8"
	Oct 03 18:40:45 ha-422561 kubelet[1961]: E1003 18:40:45.922540    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:40:45 ha-422561 kubelet[1961]:         container kube-scheduler start failed in pod kube-scheduler-ha-422561_kube-system(2640157afe5e174d7402164688eed7be): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:45 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:40:45 ha-422561 kubelet[1961]: E1003 18:40:45.922583    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-422561" podUID="2640157afe5e174d7402164688eed7be"
	Oct 03 18:40:46 ha-422561 kubelet[1961]: E1003 18:40:46.895363    1961 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:40:46 ha-422561 kubelet[1961]: E1003 18:40:46.922547    1961 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:40:46 ha-422561 kubelet[1961]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:46 ha-422561 kubelet[1961]:  > podSandboxID="d8c61f11856eaf647667c61ede204d0da4f897662d4f66aa1405fe26a28a98f5"
	Oct 03 18:40:46 ha-422561 kubelet[1961]: E1003 18:40:46.922663    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:40:46 ha-422561 kubelet[1961]:         container etcd start failed in pod etcd-ha-422561_kube-system(6803106e6cb30e1b9b282ce29772fddf): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:46 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:40:46 ha-422561 kubelet[1961]: E1003 18:40:46.922710    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-422561" podUID="6803106e6cb30e1b9b282ce29772fddf"
	Oct 03 18:40:48 ha-422561 kubelet[1961]: E1003 18:40:48.535582    1961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-422561?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 03 18:40:48 ha-422561 kubelet[1961]: I1003 18:40:48.695745    1961 kubelet_node_status.go:75] "Attempting to register node" node="ha-422561"
	Oct 03 18:40:48 ha-422561 kubelet[1961]: E1003 18:40:48.696172    1961 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-422561"
	Oct 03 18:40:49 ha-422561 kubelet[1961]: E1003 18:40:49.418760    1961 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.349401    1961 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-422561.186b0ef272ca351c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-422561,UID:ha-422561,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-422561 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-422561,},FirstTimestamp:2025-10-03 18:35:13.889039644 +0000 UTC m=+0.583846472,LastTimestamp:2025-10-03 18:35:13.889039644 +0000 UTC m=+0.583846472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-422561,}"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.895596    1961 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.895738    1961 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.916294    1961 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-422561\" not found"
	

                                                
                                                
-- /stdout --
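The describe-nodes attempt above failed five times with "connection refused" on [::1]:8443, and the container-status table is empty: no control-plane container ever came up, so nothing is listening on the apiserver port. A minimal sketch (not minikube code; the address is taken from the errors above) that reproduces the check kubectl failed:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// "connection refused" here matches the memcache.go errors:
			// no apiserver container ever started, so nothing listens.
			fmt.Println("dial failed:", err)
			return
		}
		conn.Close()
		fmt.Println("port 8443 is accepting connections")
	}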
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561: exit status 6 (291.067816ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:40:54.238924   72733 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-422561" apiserver is not running, skipping kubectl commands (state="Stopped")
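The failure visible in the kubelet excerpt is the repeated CreateContainerError "cannot open sd-bus: No such file or directory": creating a container's systemd scope needs a working D-Bus connection to systemd, so this reads as the system bus socket being unavailable inside the node. Every etcd and kube-scheduler start then fails the same way, the apiserver never comes up, and the node can never register. A stand-alone check (a sketch; the socket path is the conventional default and an assumption here):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		const sock = "/run/dbus/system_bus_socket" // conventional path, assumed
		if _, err := os.Stat(sock); err != nil {
			fmt.Println("sd-bus unavailable:", err) // would match "cannot open sd-bus" above
			return
		}
		fmt.Println(sock, "exists; sd-bus should be reachable")
	}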
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (1.51s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (1.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-422561 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-422561 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (44.218553ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-422561

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-422561 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-422561 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
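The "unexpected end of JSON input" at ha_test.go:264 is downstream of the context error: the jsonpath template wraps each node's .metadata.labels in square brackets with a comma after every entry, so even a successful run leaves a trailing comma to clean up before the output parses as JSON, and here kubectl produced no output at all. A sketch (assumed; not the test's actual parser) of decoding that shape:

	package main

	import (
		"encoding/json"
		"fmt"
		"strings"
	)

	func main() {
		// Hypothetical successful output for a one-node cluster; the real
		// run produced an empty string, which fails the same decode.
		raw := `[{"kubernetes.io/hostname":"ha-422561"},]`
		cleaned := strings.Replace(raw, ",]", "]", 1) // drop the trailing comma
		var labels []map[string]string
		if err := json.Unmarshal([]byte(cleaned), &labels); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		fmt.Println(labels)
	}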
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-422561
helpers_test.go:243: (dbg) docker inspect ha-422561:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	        "Created": "2025-10-03T18:31:00.396132938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 65481,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T18:31:00.428325646Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hostname",
	        "HostsPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hosts",
	        "LogPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512-json.log",
	        "Name": "/ha-422561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-422561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-422561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	                "LowerDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-422561",
	                "Source": "/var/lib/docker/volumes/ha-422561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422561",
	                "name.minikube.sigs.k8s.io": "ha-422561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3084976d568ce061948ebe671f279a80502b1d28417f2be7c2497961eac2a5aa",
	            "SandboxKey": "/var/run/docker/netns/3084976d568c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:e4:3c:eb:d3:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de6aa7ca29f453c0d15cb280abde7ee215f554c89e78e3db8a0f7590468114b5",
	                    "EndpointID": "1b961733d045b77a64efb8afa6caa273125f56ec888f823b790f5454f23ca3b7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422561",
	                        "eef8fc426b2b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
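Two details worth noticing in the inspect output: every PortBindings entry asks for HostIp 127.0.0.1 with an empty HostPort, so Docker assigns ephemeral host ports, and the assignments land under NetworkSettings.Ports (22/tcp → 32783, 8443/tcp → 32786, and so on). The "Last Start" log below recovers the SSH port with exactly this inspect template; a sketch of the same lookup:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same -f template minikube runs (it appears verbatim further down).
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"ha-422561").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 32783 in this run
	}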
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561: exit status 6 (289.687263ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:40:54.592224   72868 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-889240 ssh pgrep buildkitd                                                                           │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ image   │ functional-889240 image ls --format json --alsologtostderr                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image ls --format table --alsologtostderr                                                     │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image build -t localhost/my-image:functional-889240 testdata/build --alsologtostderr          │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:27 UTC │
	│ image   │ functional-889240 image ls                                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:27 UTC │ 03 Oct 25 18:27 UTC │
	│ delete  │ -p functional-889240                                                                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │ 03 Oct 25 18:30 UTC │
	│ start   │ ha-422561 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- rollout status deployment/busybox                                                          │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node add --alsologtostderr -v 5                                                                       │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:30:55
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:30:55.351405   64909 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:30:55.351662   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351671   64909 out.go:374] Setting ErrFile to fd 2...
	I1003 18:30:55.351675   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351854   64909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:30:55.352339   64909 out.go:368] Setting JSON to false
	I1003 18:30:55.353203   64909 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4406,"bootTime":1759511849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:30:55.353289   64909 start.go:140] virtualization: kvm guest
	I1003 18:30:55.355458   64909 out.go:179] * [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:30:55.356815   64909 notify.go:220] Checking for updates...
	I1003 18:30:55.356884   64909 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:30:55.358389   64909 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:30:55.359964   64909 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:30:55.361351   64909 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:30:55.362647   64909 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:30:55.363956   64909 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:30:55.365351   64909 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:30:55.387768   64909 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:30:55.387885   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.443407   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.433728571 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.443516   64909 docker.go:318] overlay module found
	I1003 18:30:55.445440   64909 out.go:179] * Using the docker driver based on user configuration
	I1003 18:30:55.446777   64909 start.go:304] selected driver: docker
	I1003 18:30:55.446793   64909 start.go:924] validating driver "docker" against <nil>
	I1003 18:30:55.446808   64909 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:30:55.447403   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.498777   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.489521827 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.498958   64909 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 18:30:55.499206   64909 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:30:55.501187   64909 out.go:179] * Using Docker driver with root privileges
	I1003 18:30:55.502312   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:30:55.502386   64909 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 18:30:55.502397   64909 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 18:30:55.502459   64909 start.go:348] cluster config:
	{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:30:55.503779   64909 out.go:179] * Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	I1003 18:30:55.504816   64909 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:30:55.506028   64909 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:30:55.507131   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:55.507167   64909 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:30:55.507169   64909 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:30:55.507175   64909 cache.go:58] Caching tarball of preloaded images
	I1003 18:30:55.507294   64909 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:30:55.507311   64909 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:30:55.507736   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:30:55.507764   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json: {Name:mk1ece959bac74a473416f0dfc8af04a6136d7b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:30:55.527458   64909 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:30:55.527478   64909 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:30:55.527494   64909 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:30:55.527527   64909 start.go:360] acquireMachinesLock for ha-422561: {Name:mk32fd04a5d9b5f89831583bab7d7527f4d187a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:30:55.527631   64909 start.go:364] duration metric: took 81.336µs to acquireMachinesLock for "ha-422561"
	I1003 18:30:55.527657   64909 start.go:93] Provisioning new machine with config: &{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:30:55.527748   64909 start.go:125] createHost starting for "" (driver="docker")
	I1003 18:30:55.529663   64909 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1003 18:30:55.529898   64909 start.go:159] libmachine.API.Create for "ha-422561" (driver="docker")
	I1003 18:30:55.529933   64909 client.go:168] LocalClient.Create starting
	I1003 18:30:55.530028   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem
	I1003 18:30:55.530072   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530097   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530187   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem
	I1003 18:30:55.530226   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530238   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530612   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 18:30:55.547068   64909 cli_runner.go:211] docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 18:30:55.547129   64909 network_create.go:284] running [docker network inspect ha-422561] to gather additional debugging logs...
	I1003 18:30:55.547146   64909 cli_runner.go:164] Run: docker network inspect ha-422561
	W1003 18:30:55.563141   64909 cli_runner.go:211] docker network inspect ha-422561 returned with exit code 1
	I1003 18:30:55.563167   64909 network_create.go:287] error running [docker network inspect ha-422561]: docker network inspect ha-422561: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-422561 not found
	I1003 18:30:55.563179   64909 network_create.go:289] output of [docker network inspect ha-422561]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-422561 not found
	
	** /stderr **
	I1003 18:30:55.563276   64909 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:30:55.579301   64909 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00157b3a0}
	I1003 18:30:55.579336   64909 network_create.go:124] attempt to create docker network ha-422561 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1003 18:30:55.579388   64909 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-422561 ha-422561
	I1003 18:30:55.634233   64909 network_create.go:108] docker network ha-422561 192.168.49.0/24 created
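network.go picked the first free private /24 (192.168.49.0/24), with .1 reserved for the gateway and .2 as the first client address, which becomes the node's static IP on the next line. A sketch (same /24 convention as the log) deriving those addresses:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		_, ipnet, err := net.ParseCIDR("192.168.49.0/24")
		if err != nil {
			panic(err)
		}
		base := ipnet.IP.To4()
		gateway := net.IPv4(base[0], base[1], base[2], 1)     // 192.168.49.1
		firstClient := net.IPv4(base[0], base[1], base[2], 2) // 192.168.49.2, the node IP
		fmt.Println(ipnet, gateway, firstClient)
	}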
	I1003 18:30:55.634260   64909 kic.go:121] calculated static IP "192.168.49.2" for the "ha-422561" container
	I1003 18:30:55.634318   64909 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 18:30:55.649960   64909 cli_runner.go:164] Run: docker volume create ha-422561 --label name.minikube.sigs.k8s.io=ha-422561 --label created_by.minikube.sigs.k8s.io=true
	I1003 18:30:55.667186   64909 oci.go:103] Successfully created a docker volume ha-422561
	I1003 18:30:55.667250   64909 cli_runner.go:164] Run: docker run --rm --name ha-422561-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --entrypoint /usr/bin/test -v ha-422561:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 18:30:56.041615   64909 oci.go:107] Successfully prepared a docker volume ha-422561
	I1003 18:30:56.041648   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:56.041669   64909 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 18:30:56.041727   64909 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 18:31:00.326417   64909 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.284654466s)
	I1003 18:31:00.326457   64909 kic.go:203] duration metric: took 4.284784967s to extract preloaded images to volume ...
	W1003 18:31:00.326567   64909 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1003 18:31:00.326610   64909 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1003 18:31:00.326657   64909 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 18:31:00.381592   64909 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-422561 --name ha-422561 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-422561 --network ha-422561 --ip 192.168.49.2 --volume ha-422561:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
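This docker run is the source of every HostConfig field shown by docker inspect earlier: --privileged, the seccomp and apparmor opts, the tmpfs mounts on /run and /tmp, the five loopback port publishes, and the memory cap. A quick arithmetic check (a sketch): --memory=3072mb is exactly the 3221225472-byte Memory value inspect reported:

	package main

	import "fmt"

	func main() {
		const mib = 1024 * 1024
		fmt.Println(3072 * mib) // 3221225472, matching "Memory" in docker inspect
	}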
	I1003 18:31:00.641348   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Running}}
	I1003 18:31:00.659876   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:00.678319   64909 cli_runner.go:164] Run: docker exec ha-422561 stat /var/lib/dpkg/alternatives/iptables
	I1003 18:31:00.728414   64909 oci.go:144] the created container "ha-422561" has a running status.
	I1003 18:31:00.728450   64909 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa...
	I1003 18:31:01.103610   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1003 18:31:01.103663   64909 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 18:31:01.128670   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.147200   64909 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 18:31:01.147218   64909 kic_runner.go:114] Args: [docker exec --privileged ha-422561 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 18:31:01.189023   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.207395   64909 machine.go:93] provisionDockerMachine start ...
	I1003 18:31:01.207497   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.226029   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.226282   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.226299   64909 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:31:01.372245   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.372275   64909 ubuntu.go:182] provisioning hostname "ha-422561"
	I1003 18:31:01.372335   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.390674   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.390889   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.390902   64909 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-422561 && echo "ha-422561" | sudo tee /etc/hostname
	I1003 18:31:01.544850   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.544932   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.563695   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.563966   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.564014   64909 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422561/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:31:01.708942   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:31:01.708971   64909 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:31:01.709036   64909 ubuntu.go:190] setting up certificates
	I1003 18:31:01.709048   64909 provision.go:84] configureAuth start
	I1003 18:31:01.709101   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:01.727778   64909 provision.go:143] copyHostCerts
	I1003 18:31:01.727814   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727849   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:31:01.727858   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727940   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:31:01.728054   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728079   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:31:01.728090   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728137   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:31:01.728200   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728225   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:31:01.728234   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728266   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:31:01.728336   64909 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.ha-422561 san=[127.0.0.1 192.168.49.2 ha-422561 localhost minikube]
	I1003 18:31:01.864219   64909 provision.go:177] copyRemoteCerts
	I1003 18:31:01.864281   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:31:01.864317   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.882069   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:01.982800   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:31:01.982877   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 18:31:02.000887   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:31:02.000952   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 18:31:02.017591   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:31:02.017639   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:31:02.034172   64909 provision.go:87] duration metric: took 325.10989ms to configureAuth
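
Note: the server cert generated above is signed for the SANs listed in the log (127.0.0.1, 192.168.49.2, ha-422561, localhost, minikube). To confirm what actually landed in /etc/docker/server.pem on the machine (openssl assumed present in the kicbase image):

	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
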
	I1003 18:31:02.034202   64909 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:31:02.034393   64909 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:31:02.034508   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.052111   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:02.052326   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:02.052344   64909 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:31:02.295594   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:31:02.295629   64909 machine.go:96] duration metric: took 1.088207423s to provisionDockerMachine
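
Note: /etc/sysconfig/crio.minikube is, presumably, picked up by the crio systemd unit via an EnvironmentFile= directive baked into the kicbase image (an assumption; the unit itself is not shown in this log), which would be why the write is immediately followed by systemctl restart crio. To verify on the node:

	systemctl cat crio | grep -i EnvironmentFile
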
	I1003 18:31:02.295640   64909 client.go:171] duration metric: took 6.765697238s to LocalClient.Create
	I1003 18:31:02.295660   64909 start.go:167] duration metric: took 6.765761646s to libmachine.API.Create "ha-422561"
	I1003 18:31:02.295669   64909 start.go:293] postStartSetup for "ha-422561" (driver="docker")
	I1003 18:31:02.295682   64909 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:31:02.295752   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:31:02.295789   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.312783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.414720   64909 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:31:02.418127   64909 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:31:02.418149   64909 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:31:02.418159   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:31:02.418213   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:31:02.418310   64909 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:31:02.418326   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:31:02.418453   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:31:02.425623   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:02.444405   64909 start.go:296] duration metric: took 148.722871ms for postStartSetup
	I1003 18:31:02.444748   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.462226   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:31:02.462456   64909 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:31:02.462495   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.478737   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.575846   64909 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:31:02.580138   64909 start.go:128] duration metric: took 7.052376255s to createHost
	I1003 18:31:02.580160   64909 start.go:83] releasing machines lock for "ha-422561", held for 7.052515614s
	I1003 18:31:02.580230   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.596730   64909 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:31:02.596776   64909 ssh_runner.go:195] Run: cat /version.json
	I1003 18:31:02.596798   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.596817   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.613783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.614183   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.764865   64909 ssh_runner.go:195] Run: systemctl --version
	I1003 18:31:02.771251   64909 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:31:02.803643   64909 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:31:02.807949   64909 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:31:02.808044   64909 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:31:02.833024   64909 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 18:31:02.833043   64909 start.go:495] detecting cgroup driver to use...
	I1003 18:31:02.833073   64909 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:31:02.833108   64909 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:31:02.847613   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:31:02.858865   64909 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:31:02.858910   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:31:02.874470   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:31:02.890554   64909 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:31:02.970342   64909 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:31:03.055310   64909 docker.go:234] disabling docker service ...
	I1003 18:31:03.055369   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:31:03.072668   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:31:03.084308   64909 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:31:03.163959   64909 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:31:03.241930   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:31:03.253863   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:31:03.266905   64909 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:31:03.266971   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.276795   64909 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:31:03.276848   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.285157   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.293117   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.301070   64909 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:31:03.308489   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.316789   64909 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.329424   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.337651   64909 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:31:03.344839   64909 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:31:03.352026   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.430894   64909 ssh_runner.go:195] Run: sudo systemctl restart crio
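
Note: taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (a reconstruction from the commands; the section headers are assumed and the file itself is not dumped in this log):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
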
	I1003 18:31:03.533915   64909 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:31:03.534002   64909 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:31:03.537783   64909 start.go:563] Will wait 60s for crictl version
	I1003 18:31:03.537838   64909 ssh_runner.go:195] Run: which crictl
	I1003 18:31:03.541393   64909 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:31:03.564883   64909 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:31:03.564963   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.591363   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.619425   64909 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:31:03.620466   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:31:03.637151   64909 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:31:03.641184   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
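
Note: the { grep -v ...; } > /tmp/h.$$; sudo cp pattern above is used because the > redirection is performed by the unprivileged shell, not by sudo, so redirecting straight into /etc/hosts would fail; staging the new file in /tmp and copying it over with sudo is the standard workaround. The same idea written out step by step (illustrative, bash syntax):

	grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/hosts.new
	printf '192.168.49.1\thost.minikube.internal\n' >> /tmp/hosts.new
	sudo cp /tmp/hosts.new /etc/hosts
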
	I1003 18:31:03.651292   64909 kubeadm.go:883] updating cluster {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:31:03.651379   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:31:03.651428   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.680883   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.680904   64909 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:31:03.680955   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.706829   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.706859   64909 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:31:03.706866   64909 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 18:31:03.706953   64909 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 18:31:03.707032   64909 ssh_runner.go:195] Run: crio config
	I1003 18:31:03.751501   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:31:03.751523   64909 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:31:03.751538   64909 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:31:03.751558   64909 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422561 NodeName:ha-422561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:31:03.751669   64909 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422561"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
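
Note: the kubeadm config above carries four documents in one file: InitConfiguration (node registration and bind address), ClusterConfiguration (control-plane endpoint, cert SANs, component extraArgs), KubeletConfiguration, and KubeProxyConfiguration. If you need to sanity-check a file like this by hand, recent kubeadm releases ship a validator (assumes kubeadm v1.26+ on PATH):

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
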
	
	I1003 18:31:03.751691   64909 kube-vip.go:115] generating kube-vip config ...
	I1003 18:31:03.751728   64909 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1003 18:31:03.763009   64909 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:31:03.763125   64909 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
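
Note: because the lsmod probe above found no ip_vs modules, kube-vip is configured for ARP-based leader election only (vip_arp=true, VIP 192.168.49.254 on eth0) and IPVS load-balancing of the control plane is skipped. To see whether the host could supply the modules (illustrative; requires module-loading privileges):

	sudo modprobe ip_vs && lsmod | grep ip_vs
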
	I1003 18:31:03.763181   64909 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:31:03.770585   64909 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:31:03.770633   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 18:31:03.778069   64909 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1003 18:31:03.790397   64909 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:31:03.805112   64909 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1003 18:31:03.817362   64909 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1003 18:31:03.830824   64909 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1003 18:31:03.834300   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:31:03.843861   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.921407   64909 ssh_runner.go:195] Run: sudo systemctl start kubelet
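
Note: the kubelet is started here before kubeadm init has written /etc/kubernetes/kubelet.conf, so in the standard kubeadm flow it exits and is restarted by systemd until that config appears; that is expected at this stage. Standard systemd checks if it seems stuck:

	systemctl status kubelet
	journalctl -u kubelet --no-pager -n 50
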
	I1003 18:31:03.944431   64909 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561 for IP: 192.168.49.2
	I1003 18:31:03.944451   64909 certs.go:195] generating shared ca certs ...
	I1003 18:31:03.944468   64909 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:03.944607   64909 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:31:03.944644   64909 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:31:03.944652   64909 certs.go:257] generating profile certs ...
	I1003 18:31:03.944708   64909 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key
	I1003 18:31:03.944722   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt with IP's: []
	I1003 18:31:04.171087   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt ...
	I1003 18:31:04.171118   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt: {Name:mked6cb0f731cbb630d2b187c4975015a458a284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171291   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key ...
	I1003 18:31:04.171301   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key: {Name:mk0c9f0a0941d99f2af213cd316467f053532c99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171391   64909 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905
	I1003 18:31:04.171406   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1003 18:31:04.383185   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 ...
	I1003 18:31:04.383218   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905: {Name:mkc24c55d4abb428b3559a93e6e301be2cab703a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383381   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 ...
	I1003 18:31:04.383394   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905: {Name:mk0576a73623089a3eecf4e34bbbd214545e2247 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383486   64909 certs.go:382] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt
	I1003 18:31:04.383601   64909 certs.go:386] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key
	I1003 18:31:04.383674   64909 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key
	I1003 18:31:04.383689   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt with IP's: []
	I1003 18:31:04.628083   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt ...
	I1003 18:31:04.628112   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt: {Name:mkc19179c67a2559968759165df93d304eb42db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.628269   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key ...
	I1003 18:31:04.628279   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key: {Name:mka8b2392a3d721a70329b852837f3403643f948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.628347   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:31:04.628364   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:31:04.628375   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:31:04.628384   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:31:04.628397   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:31:04.628410   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:31:04.628430   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:31:04.628442   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:31:04.628492   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:31:04.628525   64909 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:31:04.628535   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:31:04.628558   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:31:04.628580   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:31:04.628601   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:31:04.628637   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:04.628666   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.628680   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.628692   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.629254   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:31:04.646879   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:31:04.663465   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:31:04.679837   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:31:04.695959   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 18:31:04.712689   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:31:04.729310   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:31:04.745587   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:31:04.761663   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:31:04.779546   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:31:04.796119   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:31:04.813748   64909 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:31:04.826629   64909 ssh_runner.go:195] Run: openssl version
	I1003 18:31:04.832848   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:31:04.840960   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844465   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844506   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.878276   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:31:04.886714   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:31:04.894672   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898099   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898154   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.931606   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:31:04.940357   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:31:04.948454   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952097   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952148   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.985741   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
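
Note: the 8-hex-digit link names used above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names: OpenSSL looks up CA certificates in /etc/ssl/certs by a hash of the subject name, so each PEM gets a <hash>.0 symlink. The hash is exactly what the preceding openssl invocations print, e.g.:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
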
	I1003 18:31:04.994005   64909 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:31:04.997322   64909 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 18:31:04.997379   64909 kubeadm.go:400] StartCluster: {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:31:04.997476   64909 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:31:04.997539   64909 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:31:05.022530   64909 cri.go:89] found id: ""
	I1003 18:31:05.022595   64909 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:31:05.030329   64909 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:31:05.037782   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:31:05.037841   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:31:05.045127   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:31:05.045142   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:31:05.045174   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:31:05.052235   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:31:05.052286   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:31:05.059062   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:31:05.066034   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:31:05.066081   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:31:05.072912   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.079906   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:31:05.079966   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.086575   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:31:05.093500   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:31:05.093559   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:31:05.100246   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:31:05.136174   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:31:05.136254   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:31:05.156320   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:31:05.156407   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:31:05.156462   64909 kubeadm.go:318] OS: Linux
	I1003 18:31:05.156539   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:31:05.156610   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:31:05.156705   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:31:05.156790   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:31:05.156865   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:31:05.156939   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:31:05.157035   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:31:05.157127   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:31:05.210250   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:31:05.210408   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:31:05.210566   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:31:05.217643   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:31:05.219725   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:31:05.219828   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:31:05.219943   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:31:05.398135   64909 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 18:31:05.511875   64909 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 18:31:05.863575   64909 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 18:31:06.044823   64909 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 18:31:06.083505   64909 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 18:31:06.083616   64909 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.181464   64909 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 18:31:06.181591   64909 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.345813   64909 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 18:31:06.565989   64909 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 18:31:06.759809   64909 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 18:31:06.759892   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:31:06.883072   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:31:07.211268   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:31:07.403076   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:31:07.687412   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:31:08.052476   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:31:08.052957   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:31:08.054984   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:31:08.056889   64909 out.go:252]   - Booting up control plane ...
	I1003 18:31:08.056984   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:31:08.057047   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:31:08.057102   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:31:08.069846   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:31:08.069954   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:31:08.077490   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:31:08.077826   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:31:08.077870   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:31:08.170750   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:31:08.170893   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:31:09.172507   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001794723s
	I1003 18:31:09.175233   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:31:09.175335   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:31:09.175418   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:31:09.175496   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:35:09.177158   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	I1003 18:35:09.177466   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	I1003 18:35:09.177673   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	I1003 18:35:09.177731   64909 kubeadm.go:318] 
	I1003 18:35:09.177887   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:35:09.178114   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:35:09.178320   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:35:09.178580   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:35:09.178818   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:35:09.179017   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:35:09.179033   64909 kubeadm.go:318] 
	I1003 18:35:09.182028   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:09.182304   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:35:09.182918   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1003 18:35:09.183015   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1003 18:35:09.183174   64909 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001794723s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1003 18:35:09.183243   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 18:35:11.953646   64909 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.770379999s)
	I1003 18:35:11.953721   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:35:11.965876   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:35:11.965928   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:35:11.973363   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:35:11.973382   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:35:11.973419   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:35:11.980752   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:35:11.980806   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:35:11.987857   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:35:11.995081   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:35:11.995127   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:35:12.001778   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.009063   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:35:12.009126   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.015927   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:35:12.022875   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:35:12.022943   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:35:12.029549   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:35:12.082477   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:12.138594   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:39:14.312592   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1003 18:39:14.312818   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 18:39:14.315914   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:39:14.315992   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:39:14.316115   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:39:14.316166   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:39:14.316250   64909 kubeadm.go:318] OS: Linux
	I1003 18:39:14.316328   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:39:14.316401   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:39:14.316475   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:39:14.316553   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:39:14.316624   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:39:14.316701   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:39:14.316751   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:39:14.316825   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:39:14.316936   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:39:14.317123   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:39:14.317262   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:39:14.317314   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:39:14.319872   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:39:14.319940   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:39:14.320033   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:39:14.320122   64909 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 18:39:14.320186   64909 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 18:39:14.320253   64909 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 18:39:14.320299   64909 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 18:39:14.320350   64909 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 18:39:14.320420   64909 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 18:39:14.320509   64909 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 18:39:14.320604   64909 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 18:39:14.320671   64909 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 18:39:14.320751   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:39:14.320828   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:39:14.320904   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:39:14.321006   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:39:14.321096   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:39:14.321174   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:39:14.321279   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:39:14.321373   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:39:14.322793   64909 out.go:252]   - Booting up control plane ...
	I1003 18:39:14.322884   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:39:14.323004   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:39:14.323072   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:39:14.323162   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:39:14.323237   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:39:14.323335   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:39:14.323415   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:39:14.323456   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:39:14.323557   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:39:14.323652   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:39:14.323702   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001540709s
	I1003 18:39:14.323792   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:39:14.323860   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:39:14.323946   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:39:14.324043   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:39:14.324124   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	I1003 18:39:14.324186   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	I1003 18:39:14.324248   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	I1003 18:39:14.324258   64909 kubeadm.go:318] 
	I1003 18:39:14.324352   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:39:14.324439   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:39:14.324519   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:39:14.324595   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:39:14.324687   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:39:14.324773   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:39:14.324799   64909 kubeadm.go:318] 
	I1003 18:39:14.324836   64909 kubeadm.go:402] duration metric: took 8m9.327461574s to StartCluster
	I1003 18:39:14.324877   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:39:14.324935   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:39:14.352551   64909 cri.go:89] found id: ""
	I1003 18:39:14.352594   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.352608   64909 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:39:14.352617   64909 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:39:14.352684   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:39:14.376604   64909 cri.go:89] found id: ""
	I1003 18:39:14.376629   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.376638   64909 logs.go:284] No container was found matching "etcd"
	I1003 18:39:14.376643   64909 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:39:14.376750   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:39:14.401480   64909 cri.go:89] found id: ""
	I1003 18:39:14.401504   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.401512   64909 logs.go:284] No container was found matching "coredns"
	I1003 18:39:14.401517   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:39:14.401582   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:39:14.426822   64909 cri.go:89] found id: ""
	I1003 18:39:14.426858   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.426871   64909 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:39:14.426879   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:39:14.426946   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:39:14.451679   64909 cri.go:89] found id: ""
	I1003 18:39:14.451710   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.451722   64909 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:39:14.451730   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:39:14.451787   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:39:14.477253   64909 cri.go:89] found id: ""
	I1003 18:39:14.477275   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.477282   64909 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:39:14.477288   64909 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:39:14.477332   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:39:14.501586   64909 cri.go:89] found id: ""
	I1003 18:39:14.501613   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.501621   64909 logs.go:284] No container was found matching "kindnet"
	I1003 18:39:14.501632   64909 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:39:14.501643   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:39:14.561285   64909 logs.go:123] Gathering logs for container status ...
	I1003 18:39:14.561318   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:39:14.589589   64909 logs.go:123] Gathering logs for kubelet ...
	I1003 18:39:14.589614   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:39:14.656775   64909 logs.go:123] Gathering logs for dmesg ...
	I1003 18:39:14.656809   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:39:14.668000   64909 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:39:14.668023   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:39:14.725446   64909 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1003 18:39:14.725478   64909 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	W1003 18:39:14.725530   64909 out.go:285] * 
	W1003 18:39:14.725612   64909 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:39:14.725629   64909 out.go:285] * 
	W1003 18:39:14.727399   64909 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:39:14.731087   64909 out.go:203] 
	W1003 18:39:14.732560   64909 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:39:14.732585   64909 out.go:285] * 
	I1003 18:39:14.734183   64909 out.go:203] 
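
The crictl hint that kubeadm prints above can be made concrete for this run. A minimal sketch, assuming `minikube ssh` access to the ha-422561 node; the runtime endpoint is the one shown in the log, and CONTAINERID is a placeholder:

	# List all Kubernetes containers on the node, running or exited:
	minikube ssh -p ha-422561 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	# Then pull the logs of any failing container found above:
	minikube ssh -p ha-422561 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID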
	
	
	==> CRI-O <==
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.920098966Z" level=info msg="createCtr: removing container 60eac4f05bb70cc097a023480fc9d2f45ed0628f63763a71867879f1fd5fa153" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.920129084Z" level=info msg="createCtr: deleting container 60eac4f05bb70cc097a023480fc9d2f45ed0628f63763a71867879f1fd5fa153 from storage" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.922274937Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-422561_kube-system_6803106e6cb30e1b9b282ce29772fddf_0" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.895966159Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=5d5ebf70-cac3-422d-8424-70692dea829d name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.896076791Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=49f23445-fb1d-4650-aca8-7186c3d76e4e name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.89680709Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=1ea2be83-eafe-478b-86f5-ff2b9b2e9177 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.896872043Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=c4d1f694-5260-4074-8a93-156d0e025c5f name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.897665559Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-422561/kube-apiserver" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.897818486Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-422561/kube-controller-manager" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.897895229Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.898053794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.903279482Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.903713949Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.905147304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.906535458Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.925936651Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.927378667Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.927499752Z" level=info msg="createCtr: deleting container ID ea64bda413ffe4bf43dae710ca0af55cb5bf7537c29d07d52d6f7dc57d31729b from idIndex" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.927528924Z" level=info msg="createCtr: removing container ea64bda413ffe4bf43dae710ca0af55cb5bf7537c29d07d52d6f7dc57d31729b" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.927557417Z" level=info msg="createCtr: deleting container ea64bda413ffe4bf43dae710ca0af55cb5bf7537c29d07d52d6f7dc57d31729b from storage" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.928799426Z" level=info msg="createCtr: deleting container ID e2f4b8a4b4eb69392834fbdf154cc4c03d0594e25846b955a947d26192dbeeb2 from idIndex" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.92883909Z" level=info msg="createCtr: removing container e2f4b8a4b4eb69392834fbdf154cc4c03d0594e25846b955a947d26192dbeeb2" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.92887498Z" level=info msg="createCtr: deleting container e2f4b8a4b4eb69392834fbdf154cc4c03d0594e25846b955a947d26192dbeeb2 from storage" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.930691085Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-422561_kube-system_6ecf19dd95945fcfeaff027fad95c1ee_0" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.931071975Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-422561_kube-system_e643a03771f1e72f527532eff2c66a9c_0" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
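
The repeated "Container creation error: cannot open sd-bus: No such file or directory" above is the proximate failure: CRI-O's OCI runtime tries to reach systemd over D-Bus while creating each control-plane container, which typically means a systemd cgroup manager is configured but no systemd bus socket exists inside the kicbase container. A hedged way to check, assuming `minikube ssh` access (the config path and socket locations below are stock defaults, not taken from this log):

	# Which cgroup manager is CRI-O configured to use?
	minikube ssh -p ha-422561 -- sudo grep -r cgroup_manager /etc/crio/
	# Is any systemd/D-Bus socket reachable inside the node container?
	minikube ssh -p ha-422561 -- ls -l /run/systemd/private /run/dbus/system_bus_socket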
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:40:55.158263    3544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:55.158755    3544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:55.160290    3544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:55.160675    3544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:55.162287    3544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
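
These connection-refused errors are a downstream symptom rather than a separate fault: kubectl is dialing an apiserver that never started, which matches the empty container table above. kubeadm's stdout shows the static pod manifests were written, so the gap is between manifest and container; a sketch to confirm, reusing commands that appear elsewhere in this log:

	# Manifests exist...
	minikube ssh -p ha-422561 -- ls /etc/kubernetes/manifests/
	# ...but no apiserver container was ever created:
	minikube ssh -p ha-422561 -- sudo crictl ps -a --name=kube-apiserver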
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:40:55 up  1:23,  0 user,  load average: 0.12, 0.08, 0.07
	Linux ha-422561 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:40:46 ha-422561 kubelet[1961]:         container etcd start failed in pod etcd-ha-422561_kube-system(6803106e6cb30e1b9b282ce29772fddf): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:46 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:40:46 ha-422561 kubelet[1961]: E1003 18:40:46.922710    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-422561" podUID="6803106e6cb30e1b9b282ce29772fddf"
	Oct 03 18:40:48 ha-422561 kubelet[1961]: E1003 18:40:48.535582    1961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-422561?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 03 18:40:48 ha-422561 kubelet[1961]: I1003 18:40:48.695745    1961 kubelet_node_status.go:75] "Attempting to register node" node="ha-422561"
	Oct 03 18:40:48 ha-422561 kubelet[1961]: E1003 18:40:48.696172    1961 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-422561"
	Oct 03 18:40:49 ha-422561 kubelet[1961]: E1003 18:40:49.418760    1961 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.349401    1961 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-422561.186b0ef272ca351c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-422561,UID:ha-422561,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-422561 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-422561,},FirstTimestamp:2025-10-03 18:35:13.889039644 +0000 UTC m=+0.583846472,LastTimestamp:2025-10-03 18:35:13.889039644 +0000 UTC m=+0.583846472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-422561,}"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.895596    1961 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.895738    1961 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.916294    1961 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-422561\" not found"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.930954    1961 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:40:53 ha-422561 kubelet[1961]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:53 ha-422561 kubelet[1961]:  > podSandboxID="a859763ae69d997e72724d21d35d0ae86fcde7bd11468ef604f5a6d23f35b0f0"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.931068    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:40:53 ha-422561 kubelet[1961]:         container kube-apiserver start failed in pod kube-apiserver-ha-422561_kube-system(6ecf19dd95945fcfeaff027fad95c1ee): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:53 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.931108    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-422561" podUID="6ecf19dd95945fcfeaff027fad95c1ee"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.931305    1961 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:40:53 ha-422561 kubelet[1961]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:53 ha-422561 kubelet[1961]:  > podSandboxID="2bca45b92f4f55f540f80dd9d8d3d282362f7f0ecce2ac4786e27a3b4a9cfd4d"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.931391    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:40:53 ha-422561 kubelet[1961]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-422561_kube-system(e643a03771f1e72f527532eff2c66a9c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:53 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.932375    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-422561" podUID="e643a03771f1e72f527532eff2c66a9c"
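
The kubelet log corroborates the CRI-O side: every StartContainer attempt for etcd, kube-apiserver and kube-controller-manager fails with the same sd-bus CreateContainerError, so the control-plane health checks above were bound to time out. To gauge how persistent the failure is, a sketch using standard journalctl and grep options (assumed available in the node image):

	# Count sd-bus container-create failures in the recent kubelet journal:
	minikube ssh -p ha-422561 -- sudo journalctl -u kubelet -n 400 --no-pager | grep -c 'cannot open sd-bus'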
	

-- /stdout --
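The repeated "container create failed: cannot open sd-bus: No such file or directory" errors above mean the OCI runtime could not open systemd's D-Bus socket inside the kicbase container, which typically points at a runtime configured for the systemd cgroup manager having no bus to talk to, so the kube-apiserver and kube-controller-manager static pods never start. A minimal sketch for checking this by hand, assuming the standard systemd socket path (these commands are illustrative and were not run in this report):

	out/minikube-linux-amd64 ssh -p ha-422561 -- ls -l /run/dbus/system_bus_socket
	out/minikube-linux-amd64 ssh -p ha-422561 -- systemctl is-system-running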
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561: exit status 6 (294.418638ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1003 18:40:55.524678   73191 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-422561" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (1.29s)
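The exit status 6 above comes from the kubeconfig lookup, not from probing the node: the "ha-422561" entry is missing from the kubeconfig, so minikube cannot resolve an API endpoint before it ever dials the server. The warning's own suggested fix can be applied and verified as a sketch (assuming the profile directory is intact; not executed in this report):

	out/minikube-linux-amd64 update-context -p ha-422561
	kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'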

x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.54s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-422561" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-422561\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-422561\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-422561\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-422561" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-422561\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-422561\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-422561\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-422561
helpers_test.go:243: (dbg) docker inspect ha-422561:

-- stdout --
	[
	    {
	        "Id": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	        "Created": "2025-10-03T18:31:00.396132938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 65481,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T18:31:00.428325646Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hostname",
	        "HostsPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hosts",
	        "LogPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512-json.log",
	        "Name": "/ha-422561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-422561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-422561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	                "LowerDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-422561",
	                "Source": "/var/lib/docker/volumes/ha-422561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422561",
	                "name.minikube.sigs.k8s.io": "ha-422561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3084976d568ce061948ebe671f279a80502b1d28417f2be7c2497961eac2a5aa",
	            "SandboxKey": "/var/run/docker/netns/3084976d568c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:e4:3c:eb:d3:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de6aa7ca29f453c0d15cb280abde7ee215f554c89e78e3db8a0f7590468114b5",
	                    "EndpointID": "1b961733d045b77a64efb8afa6caa273125f56ec888f823b790f5454f23ca3b7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422561",
	                        "eef8fc426b2b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
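The full docker inspect dump is kept for the record, but single fields are easier to pull with Go templates; a sketch against the same container (these only filter the data above, they are not additional commands from the run):

	docker inspect -f '{{.State.Status}}' ha-422561
	docker inspect -f '{{(index .NetworkSettings.Networks "ha-422561").IPAddress}}' ha-422561
	docker port ha-422561 22

Per the output above these would print "running", "192.168.49.2", and "127.0.0.1:32783".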
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561: exit status 6 (289.777575ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1003 18:40:56.138794   73434 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
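As with the earlier probe, exit status 6 here reflects the missing kubeconfig entry rather than the container state, which docker still reports as running. A machine-readable view of the same check is available as a sketch (same binary, JSON output):

	out/minikube-linux-amd64 status -p ha-422561 -o json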
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterClusterStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-889240 ssh pgrep buildkitd                                                                           │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ image   │ functional-889240 image ls --format json --alsologtostderr                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image ls --format table --alsologtostderr                                                     │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image build -t localhost/my-image:functional-889240 testdata/build --alsologtostderr          │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:27 UTC │
	│ image   │ functional-889240 image ls                                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:27 UTC │ 03 Oct 25 18:27 UTC │
	│ delete  │ -p functional-889240                                                                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │ 03 Oct 25 18:30 UTC │
	│ start   │ ha-422561 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- rollout status deployment/busybox                                                          │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node add --alsologtostderr -v 5                                                                       │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:30:55
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:30:55.351405   64909 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:30:55.351662   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351671   64909 out.go:374] Setting ErrFile to fd 2...
	I1003 18:30:55.351675   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351854   64909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:30:55.352339   64909 out.go:368] Setting JSON to false
	I1003 18:30:55.353203   64909 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4406,"bootTime":1759511849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:30:55.353289   64909 start.go:140] virtualization: kvm guest
	I1003 18:30:55.355458   64909 out.go:179] * [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:30:55.356815   64909 notify.go:220] Checking for updates...
	I1003 18:30:55.356884   64909 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:30:55.358389   64909 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:30:55.359964   64909 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:30:55.361351   64909 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:30:55.362647   64909 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:30:55.363956   64909 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:30:55.365351   64909 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:30:55.387768   64909 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:30:55.387885   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.443407   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.433728571 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.443516   64909 docker.go:318] overlay module found
	I1003 18:30:55.445440   64909 out.go:179] * Using the docker driver based on user configuration
	I1003 18:30:55.446777   64909 start.go:304] selected driver: docker
	I1003 18:30:55.446793   64909 start.go:924] validating driver "docker" against <nil>
	I1003 18:30:55.446808   64909 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:30:55.447403   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.498777   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.489521827 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.498958   64909 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 18:30:55.499206   64909 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:30:55.501187   64909 out.go:179] * Using Docker driver with root privileges
	I1003 18:30:55.502312   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:30:55.502386   64909 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 18:30:55.502397   64909 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 18:30:55.502459   64909 start.go:348] cluster config:
	{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:30:55.503779   64909 out.go:179] * Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	I1003 18:30:55.504816   64909 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:30:55.506028   64909 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:30:55.507131   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:55.507167   64909 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:30:55.507169   64909 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:30:55.507175   64909 cache.go:58] Caching tarball of preloaded images
	I1003 18:30:55.507294   64909 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:30:55.507311   64909 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:30:55.507736   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:30:55.507764   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json: {Name:mk1ece959bac74a473416f0dfc8af04a6136d7b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:30:55.527458   64909 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:30:55.527478   64909 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:30:55.527494   64909 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:30:55.527527   64909 start.go:360] acquireMachinesLock for ha-422561: {Name:mk32fd04a5d9b5f89831583bab7d7527f4d187a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:30:55.527631   64909 start.go:364] duration metric: took 81.336µs to acquireMachinesLock for "ha-422561"
	I1003 18:30:55.527657   64909 start.go:93] Provisioning new machine with config: &{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:30:55.527748   64909 start.go:125] createHost starting for "" (driver="docker")
	I1003 18:30:55.529663   64909 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1003 18:30:55.529898   64909 start.go:159] libmachine.API.Create for "ha-422561" (driver="docker")
	I1003 18:30:55.529933   64909 client.go:168] LocalClient.Create starting
	I1003 18:30:55.530028   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem
	I1003 18:30:55.530072   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530097   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530187   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem
	I1003 18:30:55.530226   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530238   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530612   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 18:30:55.547068   64909 cli_runner.go:211] docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 18:30:55.547129   64909 network_create.go:284] running [docker network inspect ha-422561] to gather additional debugging logs...
	I1003 18:30:55.547146   64909 cli_runner.go:164] Run: docker network inspect ha-422561
	W1003 18:30:55.563141   64909 cli_runner.go:211] docker network inspect ha-422561 returned with exit code 1
	I1003 18:30:55.563167   64909 network_create.go:287] error running [docker network inspect ha-422561]: docker network inspect ha-422561: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-422561 not found
	I1003 18:30:55.563179   64909 network_create.go:289] output of [docker network inspect ha-422561]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-422561 not found
	
	** /stderr **
	I1003 18:30:55.563276   64909 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:30:55.579301   64909 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00157b3a0}
	I1003 18:30:55.579336   64909 network_create.go:124] attempt to create docker network ha-422561 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1003 18:30:55.579388   64909 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-422561 ha-422561
	I1003 18:30:55.634233   64909 network_create.go:108] docker network ha-422561 192.168.49.0/24 created
	I1003 18:30:55.634260   64909 kic.go:121] calculated static IP "192.168.49.2" for the "ha-422561" container
	I1003 18:30:55.634318   64909 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 18:30:55.649960   64909 cli_runner.go:164] Run: docker volume create ha-422561 --label name.minikube.sigs.k8s.io=ha-422561 --label created_by.minikube.sigs.k8s.io=true
	I1003 18:30:55.667186   64909 oci.go:103] Successfully created a docker volume ha-422561
	I1003 18:30:55.667250   64909 cli_runner.go:164] Run: docker run --rm --name ha-422561-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --entrypoint /usr/bin/test -v ha-422561:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 18:30:56.041615   64909 oci.go:107] Successfully prepared a docker volume ha-422561
	I1003 18:30:56.041648   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:56.041669   64909 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 18:30:56.041727   64909 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 18:31:00.326417   64909 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.284654466s)
	I1003 18:31:00.326457   64909 kic.go:203] duration metric: took 4.284784967s to extract preloaded images to volume ...
	W1003 18:31:00.326567   64909 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1003 18:31:00.326610   64909 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1003 18:31:00.326657   64909 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 18:31:00.381592   64909 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-422561 --name ha-422561 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-422561 --network ha-422561 --ip 192.168.49.2 --volume ha-422561:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1003 18:31:00.641348   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Running}}
	I1003 18:31:00.659876   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:00.678319   64909 cli_runner.go:164] Run: docker exec ha-422561 stat /var/lib/dpkg/alternatives/iptables
	I1003 18:31:00.728414   64909 oci.go:144] the created container "ha-422561" has a running status.
	I1003 18:31:00.728450   64909 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa...
	I1003 18:31:01.103610   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1003 18:31:01.103663   64909 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 18:31:01.128670   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.147200   64909 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 18:31:01.147218   64909 kic_runner.go:114] Args: [docker exec --privileged ha-422561 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 18:31:01.189023   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.207395   64909 machine.go:93] provisionDockerMachine start ...
	I1003 18:31:01.207497   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.226029   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.226282   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.226299   64909 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:31:01.372245   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.372275   64909 ubuntu.go:182] provisioning hostname "ha-422561"
	I1003 18:31:01.372335   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.390674   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.390889   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.390902   64909 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-422561 && echo "ha-422561" | sudo tee /etc/hostname
	I1003 18:31:01.544850   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.544932   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.563695   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.563966   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.564014   64909 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422561/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:31:01.708942   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:31:01.708971   64909 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:31:01.709036   64909 ubuntu.go:190] setting up certificates
	I1003 18:31:01.709048   64909 provision.go:84] configureAuth start
	I1003 18:31:01.709101   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:01.727778   64909 provision.go:143] copyHostCerts
	I1003 18:31:01.727814   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727849   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:31:01.727858   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727940   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:31:01.728054   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728079   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:31:01.728090   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728137   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:31:01.728200   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728225   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:31:01.728234   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728266   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:31:01.728336   64909 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.ha-422561 san=[127.0.0.1 192.168.49.2 ha-422561 localhost minikube]
	I1003 18:31:01.864219   64909 provision.go:177] copyRemoteCerts
	I1003 18:31:01.864281   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:31:01.864317   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.882069   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:01.982800   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:31:01.982877   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 18:31:02.000887   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:31:02.000952   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 18:31:02.017591   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:31:02.017639   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:31:02.034172   64909 provision.go:87] duration metric: took 325.10989ms to configureAuth
	I1003 18:31:02.034202   64909 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:31:02.034393   64909 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:31:02.034508   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.052111   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:02.052326   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:02.052344   64909 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:31:02.295594   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:31:02.295629   64909 machine.go:96] duration metric: took 1.088207423s to provisionDockerMachine
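	The CRIO_MINIKUBE_OPTIONS drop-in above is written by streaming a single shell command over the SSH session libmachine opened on 127.0.0.1:32783. A minimal sketch of that round-trip (not minikube's actual ssh_runner; the key path is a placeholder), using golang.org/x/crypto/ssh:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Placeholder key path; the log uses the machine's generated id_rsa.
		key, err := os.ReadFile("/path/to/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test node
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:32783", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		// Same shape as the provisioning command in the log above.
		out, err := session.CombinedOutput(`sudo mkdir -p /etc/sysconfig && printf %s "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube`)
		fmt.Printf("output: %s err: %v\n", out, err)
	}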
	I1003 18:31:02.295640   64909 client.go:171] duration metric: took 6.765697238s to LocalClient.Create
	I1003 18:31:02.295660   64909 start.go:167] duration metric: took 6.765761646s to libmachine.API.Create "ha-422561"
	I1003 18:31:02.295669   64909 start.go:293] postStartSetup for "ha-422561" (driver="docker")
	I1003 18:31:02.295682   64909 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:31:02.295752   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:31:02.295789   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.312783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.414720   64909 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:31:02.418127   64909 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:31:02.418149   64909 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:31:02.418159   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:31:02.418213   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:31:02.418310   64909 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:31:02.418326   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:31:02.418453   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:31:02.425623   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:02.444405   64909 start.go:296] duration metric: took 148.722871ms for postStartSetup
	I1003 18:31:02.444748   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.462226   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:31:02.462456   64909 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:31:02.462495   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.478737   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.575846   64909 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:31:02.580138   64909 start.go:128] duration metric: took 7.052376255s to createHost
	I1003 18:31:02.580160   64909 start.go:83] releasing machines lock for "ha-422561", held for 7.052515614s
	I1003 18:31:02.580230   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.596730   64909 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:31:02.596776   64909 ssh_runner.go:195] Run: cat /version.json
	I1003 18:31:02.596798   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.596817   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.613783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.614183   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.764865   64909 ssh_runner.go:195] Run: systemctl --version
	I1003 18:31:02.771251   64909 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:31:02.803643   64909 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:31:02.807949   64909 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:31:02.808044   64909 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:31:02.833024   64909 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 18:31:02.833043   64909 start.go:495] detecting cgroup driver to use...
	I1003 18:31:02.833073   64909 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:31:02.833108   64909 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:31:02.847613   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:31:02.858865   64909 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:31:02.858910   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:31:02.874470   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:31:02.890554   64909 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:31:02.970342   64909 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:31:03.055310   64909 docker.go:234] disabling docker service ...
	I1003 18:31:03.055369   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:31:03.072668   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:31:03.084308   64909 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:31:03.163959   64909 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:31:03.241930   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:31:03.253863   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:31:03.266905   64909 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:31:03.266971   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.276795   64909 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:31:03.276848   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.285157   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.293117   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.301070   64909 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:31:03.308489   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.316789   64909 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.329424   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
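	The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place and are written to be idempotent: whole-line replacement patterns for pause_image and cgroup_manager, delete-then-append for conmon_cgroup, and a grep guard before inserting default_sysctls. A sketch of the same substitutions applied in-process (illustrative input; minikube itself shells out to sed on the node):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"

		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "systemd"`)
		// Equivalent of: sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"'
		conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
			ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")

		fmt.Print(conf)
	}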
	I1003 18:31:03.337651   64909 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:31:03.344839   64909 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:31:03.352026   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.430894   64909 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 18:31:03.533915   64909 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:31:03.534002   64909 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:31:03.537783   64909 start.go:563] Will wait 60s for crictl version
	I1003 18:31:03.537838   64909 ssh_runner.go:195] Run: which crictl
	I1003 18:31:03.541393   64909 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:31:03.564883   64909 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:31:03.564963   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.591363   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.619425   64909 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:31:03.620466   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:31:03.637151   64909 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:31:03.641184   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:31:03.651292   64909 kubeadm.go:883] updating cluster {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:31:03.651379   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:31:03.651428   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.680883   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.680904   64909 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:31:03.680955   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.706829   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.706859   64909 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:31:03.706866   64909 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 18:31:03.706953   64909 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 18:31:03.707032   64909 ssh_runner.go:195] Run: crio config
	I1003 18:31:03.751501   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:31:03.751523   64909 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:31:03.751538   64909 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:31:03.751558   64909 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422561 NodeName:ha-422561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:31:03.751669   64909 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422561"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
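	minikube renders the kubeadm manifest above from a Go template fed by the options struct logged at kubeadm.go:189. A heavily simplified sketch of that rendering step (fragment only, with made-up struct names; the real template also emits the bootstrap tokens, kubelet, and kube-proxy sections):

	package main

	import (
		"os"
		"text/template"
	)

	// opts is an illustrative stand-in for the kubeadm options struct above.
	type opts struct {
		AdvertiseAddress string
		BindPort         int
		ClusterName      string
		PodSubnet        string
		ServiceCIDR      string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	clusterName: {{.ClusterName}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		// Values taken from the log lines above.
		_ = t.Execute(os.Stdout, opts{
			AdvertiseAddress: "192.168.49.2",
			BindPort:         8443,
			ClusterName:      "ha-422561",
			PodSubnet:        "10.244.0.0/16",
			ServiceCIDR:      "10.96.0.0/12",
		})
	}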
	
	I1003 18:31:03.751691   64909 kube-vip.go:115] generating kube-vip config ...
	I1003 18:31:03.751728   64909 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1003 18:31:03.763009   64909 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:31:03.763125   64909 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
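	The ip_vs probe at kube-vip.go:115-163 above is what decided this manifest's shape: because `lsmod | grep ip_vs` exited 1, control-plane load-balancing was skipped and the VIP relies on ARP failover alone. (lsmod only reports loadable modules, so a kernel with ip_vs built in would also trigger the fallback.) A sketch of that gate:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ipvsAvailable mirrors the `lsmod | grep ip_vs` check from the log:
	// grep exits 1 when no ip_vs modules are loaded, so Run returns an error.
	func ipvsAvailable() bool {
		return exec.Command("sh", "-c", "lsmod | grep ip_vs").Run() == nil
	}

	func main() {
		if ipvsAvailable() {
			fmt.Println("enabling control-plane load-balancing")
		} else {
			fmt.Println("giving up enabling control-plane load-balancing; ARP-only VIP failover")
		}
	}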
	I1003 18:31:03.763181   64909 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:31:03.770585   64909 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:31:03.770633   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 18:31:03.778069   64909 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1003 18:31:03.790397   64909 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:31:03.805112   64909 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1003 18:31:03.817362   64909 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1003 18:31:03.830824   64909 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1003 18:31:03.834300   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
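	The /etc/hosts edits at 18:31:03.641184 and 18:31:03.834300 use a replace-or-append idiom: grep -v filters out any stale line for the name, the fresh mapping is appended, and the temp file is copied back with sudo. The same logic in-process (inlined input for illustration):

	package main

	import (
		"fmt"
		"strings"
	)

	// setHostsEntry drops any existing line ending in "\t<name>" and appends
	// a fresh "ip\tname" mapping, like the bash one-liner in the log.
	func setHostsEntry(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		hosts := "127.0.0.1\tlocalhost\n192.168.49.253\tcontrol-plane.minikube.internal\n"
		fmt.Print(setHostsEntry(hosts, "192.168.49.254", "control-plane.minikube.internal"))
	}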
	I1003 18:31:03.843861   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.921407   64909 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:31:03.944431   64909 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561 for IP: 192.168.49.2
	I1003 18:31:03.944451   64909 certs.go:195] generating shared ca certs ...
	I1003 18:31:03.944468   64909 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:03.944607   64909 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:31:03.944644   64909 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:31:03.944652   64909 certs.go:257] generating profile certs ...
	I1003 18:31:03.944708   64909 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key
	I1003 18:31:03.944722   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt with IP's: []
	I1003 18:31:04.171087   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt ...
	I1003 18:31:04.171118   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt: {Name:mked6cb0f731cbb630d2b187c4975015a458a284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171291   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key ...
	I1003 18:31:04.171301   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key: {Name:mk0c9f0a0941d99f2af213cd316467f053532c99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171391   64909 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905
	I1003 18:31:04.171406   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1003 18:31:04.383185   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 ...
	I1003 18:31:04.383218   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905: {Name:mkc24c55d4abb428b3559a93e6e301be2cab703a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383381   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 ...
	I1003 18:31:04.383394   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905: {Name:mk0576a73623089a3eecf4e34bbbd214545e2247 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383486   64909 certs.go:382] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt
	I1003 18:31:04.383601   64909 certs.go:386] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key
	I1003 18:31:04.383674   64909 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key
	I1003 18:31:04.383689   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt with IP's: []
	I1003 18:31:04.628083   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt ...
	I1003 18:31:04.628112   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt: {Name:mkc19179c67a2559968759165df93d304eb42db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.628269   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key ...
	I1003 18:31:04.628279   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key: {Name:mka8b2392a3d721a70329b852837f3403643f948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
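	The profile certs above are x509 key pairs whose IP SANs combine the service CIDR's first address, localhost, the node IP, and the HA VIP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2, 192.168.49.254 for the apiserver cert). A minimal sketch of minting such a cert with crypto/x509; unlike the real flow, which signs with the minikubeCA key, this one self-signs for brevity:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The IP SANs from the apiserver cert log line above.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
				net.ParseIP("192.168.49.254"),
			},
		}
		// Self-signed: template doubles as parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}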
	I1003 18:31:04.628347   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:31:04.628364   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:31:04.628375   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:31:04.628384   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:31:04.628397   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:31:04.628410   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:31:04.628430   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:31:04.628442   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:31:04.628492   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:31:04.628525   64909 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:31:04.628535   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:31:04.628558   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:31:04.628580   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:31:04.628601   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:31:04.628637   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:04.628666   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.628680   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.628692   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.629254   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:31:04.646879   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:31:04.663465   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:31:04.679837   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:31:04.695959   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 18:31:04.712689   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:31:04.729310   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:31:04.745587   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:31:04.761663   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:31:04.779546   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:31:04.796119   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:31:04.813748   64909 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:31:04.826629   64909 ssh_runner.go:195] Run: openssl version
	I1003 18:31:04.832848   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:31:04.840960   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844465   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844506   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.878276   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:31:04.886714   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:31:04.894672   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898099   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898154   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.931606   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:31:04.940357   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:31:04.948454   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952097   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952148   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.985741   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
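	The openssl x509 -hash / ln -fs pairs above populate OpenSSL's hashed trust directory: each CA under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink (b5213941.0 for minikubeCA here) so verification can locate it by hash. A sketch of one hash-and-link step (paths illustrative; requires the openssl CLI and write access to /etc/ssl/certs):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
		// Same command as in the log: openssl x509 -hash -noout -in <pem>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // equivalent of ln -fs: replace any stale link
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link, "->", pemPath)
	}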
	I1003 18:31:04.994005   64909 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:31:04.997322   64909 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 18:31:04.997379   64909 kubeadm.go:400] StartCluster: {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:31:04.997476   64909 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:31:04.997539   64909 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:31:05.022530   64909 cri.go:89] found id: ""
	I1003 18:31:05.022595   64909 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:31:05.030329   64909 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:31:05.037782   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:31:05.037841   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:31:05.045127   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:31:05.045142   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:31:05.045174   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:31:05.052235   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:31:05.052286   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:31:05.059062   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:31:05.066034   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:31:05.066081   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:31:05.072912   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.079906   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:31:05.079966   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.086575   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:31:05.093500   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:31:05.093559   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:31:05.100246   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:31:05.136174   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:31:05.136254   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:31:05.156320   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:31:05.156407   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:31:05.156462   64909 kubeadm.go:318] OS: Linux
	I1003 18:31:05.156539   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:31:05.156610   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:31:05.156705   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:31:05.156790   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:31:05.156865   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:31:05.156939   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:31:05.157035   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:31:05.157127   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:31:05.210250   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:31:05.210408   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:31:05.210566   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:31:05.217643   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:31:05.219725   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:31:05.219828   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:31:05.219943   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:31:05.398135   64909 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 18:31:05.511875   64909 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 18:31:05.863575   64909 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 18:31:06.044823   64909 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 18:31:06.083505   64909 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 18:31:06.083616   64909 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.181464   64909 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 18:31:06.181591   64909 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.345813   64909 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 18:31:06.565989   64909 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 18:31:06.759809   64909 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 18:31:06.759892   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:31:06.883072   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:31:07.211268   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:31:07.403076   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:31:07.687412   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:31:08.052476   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:31:08.052957   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:31:08.054984   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:31:08.056889   64909 out.go:252]   - Booting up control plane ...
	I1003 18:31:08.056984   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:31:08.057047   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:31:08.057102   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:31:08.069846   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:31:08.069954   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:31:08.077490   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:31:08.077826   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:31:08.077870   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:31:08.170750   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:31:08.170893   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:31:09.172507   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001794723s
	I1003 18:31:09.175233   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:31:09.175335   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:31:09.175418   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:31:09.175496   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:35:09.177158   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	I1003 18:35:09.177466   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	I1003 18:35:09.177673   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	I1003 18:35:09.177731   64909 kubeadm.go:318] 
	I1003 18:35:09.177887   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:35:09.178114   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:35:09.178320   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:35:09.178580   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:35:09.178818   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:35:09.179017   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:35:09.179033   64909 kubeadm.go:318] 
	I1003 18:35:09.182028   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:09.182304   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:35:09.182918   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1003 18:35:09.183015   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1003 18:35:09.183174   64909 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001794723s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1003 18:35:09.183243   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 18:35:11.953646   64909 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.770379999s)
	I1003 18:35:11.953721   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:35:11.965876   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:35:11.965928   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:35:11.973363   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:35:11.973382   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:35:11.973419   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:35:11.980752   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:35:11.980806   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:35:11.987857   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:35:11.995081   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:35:11.995127   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:35:12.001778   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.009063   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:35:12.009126   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.015927   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:35:12.022875   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:35:12.022943   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:35:12.029549   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:35:12.082477   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:12.138594   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:39:14.312592   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1003 18:39:14.312818   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 18:39:14.315914   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:39:14.315992   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:39:14.316115   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:39:14.316166   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:39:14.316250   64909 kubeadm.go:318] OS: Linux
	I1003 18:39:14.316328   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:39:14.316401   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:39:14.316475   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:39:14.316553   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:39:14.316624   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:39:14.316701   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:39:14.316751   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:39:14.316825   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:39:14.316936   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:39:14.317123   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:39:14.317262   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:39:14.317314   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:39:14.319872   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:39:14.319940   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:39:14.320033   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:39:14.320122   64909 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 18:39:14.320186   64909 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 18:39:14.320253   64909 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 18:39:14.320299   64909 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 18:39:14.320350   64909 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 18:39:14.320420   64909 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 18:39:14.320509   64909 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 18:39:14.320604   64909 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 18:39:14.320671   64909 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 18:39:14.320751   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:39:14.320828   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:39:14.320904   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:39:14.321006   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:39:14.321096   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:39:14.321174   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:39:14.321279   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:39:14.321373   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:39:14.322793   64909 out.go:252]   - Booting up control plane ...
	I1003 18:39:14.322884   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:39:14.323004   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:39:14.323072   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:39:14.323162   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:39:14.323237   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:39:14.323335   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:39:14.323415   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:39:14.323456   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:39:14.323557   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:39:14.323652   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:39:14.323702   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001540709s
	I1003 18:39:14.323792   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:39:14.323860   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:39:14.323946   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:39:14.324043   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:39:14.324124   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	I1003 18:39:14.324186   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	I1003 18:39:14.324248   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	I1003 18:39:14.324258   64909 kubeadm.go:318] 
	I1003 18:39:14.324352   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:39:14.324439   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:39:14.324519   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:39:14.324595   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:39:14.324687   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:39:14.324773   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:39:14.324799   64909 kubeadm.go:318] 
	I1003 18:39:14.324836   64909 kubeadm.go:402] duration metric: took 8m9.327461574s to StartCluster
	I1003 18:39:14.324877   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:39:14.324935   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:39:14.352551   64909 cri.go:89] found id: ""
	I1003 18:39:14.352594   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.352608   64909 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:39:14.352617   64909 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:39:14.352684   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:39:14.376604   64909 cri.go:89] found id: ""
	I1003 18:39:14.376629   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.376638   64909 logs.go:284] No container was found matching "etcd"
	I1003 18:39:14.376643   64909 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:39:14.376750   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:39:14.401480   64909 cri.go:89] found id: ""
	I1003 18:39:14.401504   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.401512   64909 logs.go:284] No container was found matching "coredns"
	I1003 18:39:14.401517   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:39:14.401582   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:39:14.426822   64909 cri.go:89] found id: ""
	I1003 18:39:14.426858   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.426871   64909 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:39:14.426879   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:39:14.426946   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:39:14.451679   64909 cri.go:89] found id: ""
	I1003 18:39:14.451710   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.451722   64909 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:39:14.451730   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:39:14.451787   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:39:14.477253   64909 cri.go:89] found id: ""
	I1003 18:39:14.477275   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.477282   64909 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:39:14.477288   64909 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:39:14.477332   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:39:14.501586   64909 cri.go:89] found id: ""
	I1003 18:39:14.501613   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.501621   64909 logs.go:284] No container was found matching "kindnet"
	I1003 18:39:14.501632   64909 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:39:14.501643   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:39:14.561285   64909 logs.go:123] Gathering logs for container status ...
	I1003 18:39:14.561318   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:39:14.589589   64909 logs.go:123] Gathering logs for kubelet ...
	I1003 18:39:14.589614   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:39:14.656775   64909 logs.go:123] Gathering logs for dmesg ...
	I1003 18:39:14.656809   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:39:14.668000   64909 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:39:14.668023   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:39:14.725446   64909 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1003 18:39:14.725478   64909 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	W1003 18:39:14.725530   64909 out.go:285] * 
	W1003 18:39:14.725612   64909 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:39:14.725629   64909 out.go:285] * 
	W1003 18:39:14.727399   64909 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:39:14.731087   64909 out.go:203] 
	W1003 18:39:14.732560   64909 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:39:14.732585   64909 out.go:285] * 
	I1003 18:39:14.734183   64909 out.go:203] 
	
	
	==> CRI-O <==
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.920098966Z" level=info msg="createCtr: removing container 60eac4f05bb70cc097a023480fc9d2f45ed0628f63763a71867879f1fd5fa153" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.920129084Z" level=info msg="createCtr: deleting container 60eac4f05bb70cc097a023480fc9d2f45ed0628f63763a71867879f1fd5fa153 from storage" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.922274937Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-422561_kube-system_6803106e6cb30e1b9b282ce29772fddf_0" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.895966159Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=5d5ebf70-cac3-422d-8424-70692dea829d name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.896076791Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=49f23445-fb1d-4650-aca8-7186c3d76e4e name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.89680709Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=1ea2be83-eafe-478b-86f5-ff2b9b2e9177 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.896872043Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=c4d1f694-5260-4074-8a93-156d0e025c5f name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.897665559Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-422561/kube-apiserver" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.897818486Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-422561/kube-controller-manager" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.897895229Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.898053794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.903279482Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.903713949Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.905147304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.906535458Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.925936651Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.927378667Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.927499752Z" level=info msg="createCtr: deleting container ID ea64bda413ffe4bf43dae710ca0af55cb5bf7537c29d07d52d6f7dc57d31729b from idIndex" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.927528924Z" level=info msg="createCtr: removing container ea64bda413ffe4bf43dae710ca0af55cb5bf7537c29d07d52d6f7dc57d31729b" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.927557417Z" level=info msg="createCtr: deleting container ea64bda413ffe4bf43dae710ca0af55cb5bf7537c29d07d52d6f7dc57d31729b from storage" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.928799426Z" level=info msg="createCtr: deleting container ID e2f4b8a4b4eb69392834fbdf154cc4c03d0594e25846b955a947d26192dbeeb2 from idIndex" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.92883909Z" level=info msg="createCtr: removing container e2f4b8a4b4eb69392834fbdf154cc4c03d0594e25846b955a947d26192dbeeb2" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.92887498Z" level=info msg="createCtr: deleting container e2f4b8a4b4eb69392834fbdf154cc4c03d0594e25846b955a947d26192dbeeb2 from storage" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.930691085Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-422561_kube-system_6ecf19dd95945fcfeaff027fad95c1ee_0" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.931071975Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-422561_kube-system_e643a03771f1e72f527532eff2c66a9c_0" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:40:56.698704    3716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:56.699139    3716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:56.700643    3716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:56.701053    3716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:56.702507    3716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:40:56 up  1:23,  0 user,  load average: 0.67, 0.19, 0.11
	Linux ha-422561 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:40:48 ha-422561 kubelet[1961]: E1003 18:40:48.535582    1961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-422561?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 03 18:40:48 ha-422561 kubelet[1961]: I1003 18:40:48.695745    1961 kubelet_node_status.go:75] "Attempting to register node" node="ha-422561"
	Oct 03 18:40:48 ha-422561 kubelet[1961]: E1003 18:40:48.696172    1961 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-422561"
	Oct 03 18:40:49 ha-422561 kubelet[1961]: E1003 18:40:49.418760    1961 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.349401    1961 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-422561.186b0ef272ca351c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-422561,UID:ha-422561,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-422561 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-422561,},FirstTimestamp:2025-10-03 18:35:13.889039644 +0000 UTC m=+0.583846472,LastTimestamp:2025-10-03 18:35:13.889039644 +0000 UTC m=+0.583846472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-422561,}"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.895596    1961 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.895738    1961 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.916294    1961 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-422561\" not found"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.930954    1961 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:40:53 ha-422561 kubelet[1961]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:53 ha-422561 kubelet[1961]:  > podSandboxID="a859763ae69d997e72724d21d35d0ae86fcde7bd11468ef604f5a6d23f35b0f0"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.931068    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:40:53 ha-422561 kubelet[1961]:         container kube-apiserver start failed in pod kube-apiserver-ha-422561_kube-system(6ecf19dd95945fcfeaff027fad95c1ee): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:53 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.931108    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-422561" podUID="6ecf19dd95945fcfeaff027fad95c1ee"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.931305    1961 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:40:53 ha-422561 kubelet[1961]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:53 ha-422561 kubelet[1961]:  > podSandboxID="2bca45b92f4f55f540f80dd9d8d3d282362f7f0ecce2ac4786e27a3b4a9cfd4d"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.931391    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:40:53 ha-422561 kubelet[1961]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-422561_kube-system(e643a03771f1e72f527532eff2c66a9c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:53 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.932375    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-422561" podUID="e643a03771f1e72f527532eff2c66a9c"
	Oct 03 18:40:55 ha-422561 kubelet[1961]: E1003 18:40:55.536511    1961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-422561?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 03 18:40:55 ha-422561 kubelet[1961]: I1003 18:40:55.697332    1961 kubelet_node_status.go:75] "Attempting to register node" node="ha-422561"
	Oct 03 18:40:55 ha-422561 kubelet[1961]: E1003 18:40:55.697723    1961 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-422561"
	

-- /stdout --
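The decisive failure signature is in the dump above: CRI-O aborts every control-plane container create with "cannot open sd-bus: No such file or directory", and the kubelet surfaces the same error as CreateContainerError for kube-apiserver and kube-controller-manager. That message usually means the OCI runtime was asked to use the systemd cgroup manager but cannot reach a systemd D-Bus socket inside the node. A minimal triage sketch, assuming the ha-422561 node container is still running and CRI-O is on its stock config paths (both assumptions, not taken from this report):

	# hop into the node and confirm no kube container ever reaches Running
	minikube -p ha-422561 ssh
	sudo crictl ps -a | grep kube | grep -v pause
	# check which cgroup manager CRI-O is configured for
	sudo grep -rn cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null
	# check whether the sd-bus endpoints a systemd cgroup manager needs exist
	ls -l /run/dbus/system_bus_socket /run/systemd/private 2>/dev/null

If cgroup_manager = "systemd" is set while neither socket exists inside the node, container creation fails exactly as logged; switching CRI-O to "cgroupfs" (keeping the kubelet's cgroupDriver in sync) would be one possible workaround, not a fix verified by this run.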
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561: exit status 6 (289.854751ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1003 18:40:57.064510   73761 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-422561" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.54s)
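The remaining tests in this serial suite fail in roughly 1.5s each for the same underlying reason: the status probe reports the apiserver Stopped, and the "ha-422561" endpoint is gone from the kubeconfig (status.go:458 above). Had the control plane recovered, the stale context could be repaired the way the WARNING suggests; a sketch, assuming the same binary and profile name:

	# rewrite the kubectl context for this profile, then re-probe the apiserver
	out/minikube-linux-amd64 update-context -p ha-422561
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p ha-422561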

x
+
TestMultiControlPlane/serial/CopyFile (1.52s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 status --output json --alsologtostderr -v 5: exit status 6 (290.20384ms)

-- stdout --
	{"Name":"ha-422561","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Misconfigured","Worker":false}

-- /stdout --
** stderr ** 
	I1003 18:40:57.134334   73872 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:40:57.134561   73872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:40:57.134569   73872 out.go:374] Setting ErrFile to fd 2...
	I1003 18:40:57.134573   73872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:40:57.134762   73872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:40:57.134933   73872 out.go:368] Setting JSON to true
	I1003 18:40:57.134970   73872 mustload.go:65] Loading cluster: ha-422561
	I1003 18:40:57.135016   73872 notify.go:220] Checking for updates...
	I1003 18:40:57.135298   73872 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:40:57.135312   73872 status.go:174] checking status of ha-422561 ...
	I1003 18:40:57.135700   73872 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:40:57.156660   73872 status.go:371] ha-422561 host status = "Running" (err=<nil>)
	I1003 18:40:57.156679   73872 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:40:57.156883   73872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:40:57.172995   73872 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:40:57.173210   73872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:40:57.173247   73872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:40:57.189439   73872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:40:57.286849   73872 ssh_runner.go:195] Run: systemctl --version
	I1003 18:40:57.292828   73872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:40:57.304114   73872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:40:57.355616   73872 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:40:57.345174741 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1003 18:40:57.356064   73872 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:40:57.356090   73872 api_server.go:166] Checking apiserver status ...
	I1003 18:40:57.356121   73872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 18:40:57.365768   73872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:40:57.365800   73872 status.go:463] ha-422561 apiserver status = Running (err=<nil>)
	I1003 18:40:57.365813   73872 status.go:176] ha-422561 status: &{Name:ha-422561 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:330: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-422561 status --output json --alsologtostderr -v 5" : exit status 6
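Exit status 6 from minikube status is a bitmask rather than a generic error: per the status command's help text, bit 1 flags the host, bit 2 the cluster, and bit 4 Kubernetes, so 6 = 2 + 4 reads as host OK but cluster and Kubernetes down, which matches the Host:Running/APIServer:Stopped JSON above. To reproduce the same probe by hand:

	out/minikube-linux-amd64 -p ha-422561 status --output json; echo "exit=$?"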
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-422561
helpers_test.go:243: (dbg) docker inspect ha-422561:

-- stdout --
	[
	    {
	        "Id": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	        "Created": "2025-10-03T18:31:00.396132938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 65481,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T18:31:00.428325646Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hostname",
	        "HostsPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hosts",
	        "LogPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512-json.log",
	        "Name": "/ha-422561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-422561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-422561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	                "LowerDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-422561",
	                "Source": "/var/lib/docker/volumes/ha-422561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422561",
	                "name.minikube.sigs.k8s.io": "ha-422561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3084976d568ce061948ebe671f279a80502b1d28417f2be7c2497961eac2a5aa",
	            "SandboxKey": "/var/run/docker/netns/3084976d568c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:e4:3c:eb:d3:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de6aa7ca29f453c0d15cb280abde7ee215f554c89e78e3db8a0f7590468114b5",
	                    "EndpointID": "1b961733d045b77a64efb8afa6caa273125f56ec888f823b790f5454f23ca3b7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422561",
	                        "eef8fc426b2b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
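The inspect dump itself looks healthy: State.Running is true, the container holds the expected static IP 192.168.49.2, and all five published ports are bound on 127.0.0.1 (SSH on 32783, the apiserver's 8443 on 32786). The same Go template the status code runs (see the cli_runner lines above) can pull a single mapping straight out of this JSON:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-422561     # 32783 in this run
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-422561   # 32786 in this run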
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561: exit status 6 (285.503765ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1003 18:40:57.660759   73995 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
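The post-mortem below was captured with logs -n 25, i.e. roughly the last 25 lines of each log source; when triaging a run like this locally, a fuller capture can be written to a file instead (flag per minikube logs --help):

	out/minikube-linux-amd64 -p ha-422561 logs --file=./ha-422561-postmortem.log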
helpers_test.go:252: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-889240 ssh pgrep buildkitd                                                                           │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ image   │ functional-889240 image ls --format json --alsologtostderr                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image ls --format table --alsologtostderr                                                     │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image build -t localhost/my-image:functional-889240 testdata/build --alsologtostderr          │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:27 UTC │
	│ image   │ functional-889240 image ls                                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:27 UTC │ 03 Oct 25 18:27 UTC │
	│ delete  │ -p functional-889240                                                                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │ 03 Oct 25 18:30 UTC │
	│ start   │ ha-422561 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- rollout status deployment/busybox                                                          │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node add --alsologtostderr -v 5                                                                       │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:30:55
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:30:55.351405   64909 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:30:55.351662   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351671   64909 out.go:374] Setting ErrFile to fd 2...
	I1003 18:30:55.351675   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351854   64909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:30:55.352339   64909 out.go:368] Setting JSON to false
	I1003 18:30:55.353203   64909 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4406,"bootTime":1759511849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:30:55.353289   64909 start.go:140] virtualization: kvm guest
	I1003 18:30:55.355458   64909 out.go:179] * [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:30:55.356815   64909 notify.go:220] Checking for updates...
	I1003 18:30:55.356884   64909 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:30:55.358389   64909 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:30:55.359964   64909 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:30:55.361351   64909 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:30:55.362647   64909 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:30:55.363956   64909 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:30:55.365351   64909 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:30:55.387768   64909 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:30:55.387885   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.443407   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.433728571 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.443516   64909 docker.go:318] overlay module found
	I1003 18:30:55.445440   64909 out.go:179] * Using the docker driver based on user configuration
	I1003 18:30:55.446777   64909 start.go:304] selected driver: docker
	I1003 18:30:55.446793   64909 start.go:924] validating driver "docker" against <nil>
	I1003 18:30:55.446808   64909 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:30:55.447403   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.498777   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.489521827 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.498958   64909 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 18:30:55.499206   64909 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:30:55.501187   64909 out.go:179] * Using Docker driver with root privileges
	I1003 18:30:55.502312   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:30:55.502386   64909 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 18:30:55.502397   64909 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 18:30:55.502459   64909 start.go:348] cluster config:
	{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:30:55.503779   64909 out.go:179] * Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	I1003 18:30:55.504816   64909 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:30:55.506028   64909 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:30:55.507131   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:55.507167   64909 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:30:55.507169   64909 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:30:55.507175   64909 cache.go:58] Caching tarball of preloaded images
	I1003 18:30:55.507294   64909 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:30:55.507311   64909 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:30:55.507736   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:30:55.507764   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json: {Name:mk1ece959bac74a473416f0dfc8af04a6136d7b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:30:55.527458   64909 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:30:55.527478   64909 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:30:55.527494   64909 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:30:55.527527   64909 start.go:360] acquireMachinesLock for ha-422561: {Name:mk32fd04a5d9b5f89831583bab7d7527f4d187a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:30:55.527631   64909 start.go:364] duration metric: took 81.336µs to acquireMachinesLock for "ha-422561"
	I1003 18:30:55.527657   64909 start.go:93] Provisioning new machine with config: &{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:30:55.527748   64909 start.go:125] createHost starting for "" (driver="docker")
	I1003 18:30:55.529663   64909 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1003 18:30:55.529898   64909 start.go:159] libmachine.API.Create for "ha-422561" (driver="docker")
	I1003 18:30:55.529933   64909 client.go:168] LocalClient.Create starting
	I1003 18:30:55.530028   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem
	I1003 18:30:55.530072   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530097   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530187   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem
	I1003 18:30:55.530226   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530238   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530612   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 18:30:55.547068   64909 cli_runner.go:211] docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 18:30:55.547129   64909 network_create.go:284] running [docker network inspect ha-422561] to gather additional debugging logs...
	I1003 18:30:55.547146   64909 cli_runner.go:164] Run: docker network inspect ha-422561
	W1003 18:30:55.563141   64909 cli_runner.go:211] docker network inspect ha-422561 returned with exit code 1
	I1003 18:30:55.563167   64909 network_create.go:287] error running [docker network inspect ha-422561]: docker network inspect ha-422561: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-422561 not found
	I1003 18:30:55.563179   64909 network_create.go:289] output of [docker network inspect ha-422561]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-422561 not found
	
	** /stderr **
	I1003 18:30:55.563276   64909 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:30:55.579301   64909 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00157b3a0}
	I1003 18:30:55.579336   64909 network_create.go:124] attempt to create docker network ha-422561 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1003 18:30:55.579388   64909 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-422561 ha-422561
	I1003 18:30:55.634233   64909 network_create.go:108] docker network ha-422561 192.168.49.0/24 created
	I1003 18:30:55.634260   64909 kic.go:121] calculated static IP "192.168.49.2" for the "ha-422561" container
	I1003 18:30:55.634318   64909 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 18:30:55.649960   64909 cli_runner.go:164] Run: docker volume create ha-422561 --label name.minikube.sigs.k8s.io=ha-422561 --label created_by.minikube.sigs.k8s.io=true
	I1003 18:30:55.667186   64909 oci.go:103] Successfully created a docker volume ha-422561
	I1003 18:30:55.667250   64909 cli_runner.go:164] Run: docker run --rm --name ha-422561-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --entrypoint /usr/bin/test -v ha-422561:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 18:30:56.041615   64909 oci.go:107] Successfully prepared a docker volume ha-422561
	I1003 18:30:56.041648   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:56.041669   64909 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 18:30:56.041727   64909 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 18:31:00.326417   64909 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.284654466s)
	I1003 18:31:00.326457   64909 kic.go:203] duration metric: took 4.284784967s to extract preloaded images to volume ...
	W1003 18:31:00.326567   64909 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1003 18:31:00.326610   64909 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1003 18:31:00.326657   64909 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 18:31:00.381592   64909 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-422561 --name ha-422561 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-422561 --network ha-422561 --ip 192.168.49.2 --volume ha-422561:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1003 18:31:00.641348   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Running}}
	I1003 18:31:00.659876   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:00.678319   64909 cli_runner.go:164] Run: docker exec ha-422561 stat /var/lib/dpkg/alternatives/iptables
	I1003 18:31:00.728414   64909 oci.go:144] the created container "ha-422561" has a running status.
	I1003 18:31:00.728450   64909 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa...
	I1003 18:31:01.103610   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1003 18:31:01.103663   64909 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 18:31:01.128670   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.147200   64909 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 18:31:01.147218   64909 kic_runner.go:114] Args: [docker exec --privileged ha-422561 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 18:31:01.189023   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.207395   64909 machine.go:93] provisionDockerMachine start ...
	I1003 18:31:01.207497   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.226029   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.226282   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.226299   64909 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:31:01.372245   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.372275   64909 ubuntu.go:182] provisioning hostname "ha-422561"
	I1003 18:31:01.372335   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.390674   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.390889   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.390902   64909 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-422561 && echo "ha-422561" | sudo tee /etc/hostname
	I1003 18:31:01.544850   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.544932   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.563695   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.563966   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.564014   64909 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422561/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:31:01.708942   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:31:01.708971   64909 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:31:01.709036   64909 ubuntu.go:190] setting up certificates
	I1003 18:31:01.709048   64909 provision.go:84] configureAuth start
	I1003 18:31:01.709101   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:01.727778   64909 provision.go:143] copyHostCerts
	I1003 18:31:01.727814   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727849   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:31:01.727858   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727940   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:31:01.728054   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728079   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:31:01.728090   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728137   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:31:01.728200   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728225   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:31:01.728234   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728266   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:31:01.728336   64909 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.ha-422561 san=[127.0.0.1 192.168.49.2 ha-422561 localhost minikube]
	I1003 18:31:01.864219   64909 provision.go:177] copyRemoteCerts
	I1003 18:31:01.864281   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:31:01.864317   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.882069   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:01.982800   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:31:01.982877   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 18:31:02.000887   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:31:02.000952   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 18:31:02.017591   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:31:02.017639   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:31:02.034172   64909 provision.go:87] duration metric: took 325.10989ms to configureAuth
	I1003 18:31:02.034202   64909 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:31:02.034393   64909 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:31:02.034508   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.052111   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:02.052326   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:02.052344   64909 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:31:02.295594   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
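The CRIO_MINIKUBE_OPTIONS line echoed back confirms the env file was written; the crio systemd unit in the kicbase image is expected to source /etc/sysconfig/crio.minikube, so the restart picks up the extra --insecure-registry flag. A quick verification from inside the node (a sketch; run via minikube ssh or docker exec):

    systemctl cat crio      # unit should reference /etc/sysconfig/crio.minikube
    ps -C crio -o args=     # running daemon should show --insecure-registry 10.96.0.0/12
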
	I1003 18:31:02.295629   64909 machine.go:96] duration metric: took 1.088207423s to provisionDockerMachine
	I1003 18:31:02.295640   64909 client.go:171] duration metric: took 6.765697238s to LocalClient.Create
	I1003 18:31:02.295660   64909 start.go:167] duration metric: took 6.765761646s to libmachine.API.Create "ha-422561"
	I1003 18:31:02.295669   64909 start.go:293] postStartSetup for "ha-422561" (driver="docker")
	I1003 18:31:02.295682   64909 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:31:02.295752   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:31:02.295789   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.312783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.414720   64909 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:31:02.418127   64909 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:31:02.418149   64909 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:31:02.418159   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:31:02.418213   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:31:02.418310   64909 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:31:02.418326   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:31:02.418453   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:31:02.425623   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:02.444405   64909 start.go:296] duration metric: took 148.722871ms for postStartSetup
	I1003 18:31:02.444748   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.462226   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:31:02.462456   64909 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:31:02.462495   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.478737   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.575846   64909 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:31:02.580138   64909 start.go:128] duration metric: took 7.052376255s to createHost
	I1003 18:31:02.580160   64909 start.go:83] releasing machines lock for "ha-422561", held for 7.052515614s
	I1003 18:31:02.580230   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.596730   64909 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:31:02.596776   64909 ssh_runner.go:195] Run: cat /version.json
	I1003 18:31:02.596798   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.596817   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.613783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.614183   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.764865   64909 ssh_runner.go:195] Run: systemctl --version
	I1003 18:31:02.771251   64909 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:31:02.803643   64909 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:31:02.807949   64909 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:31:02.808044   64909 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:31:02.833024   64909 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
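
Renaming the bundled bridge/podman CNI configs to *.mk_disabled ensures only the CNI minikube deploys later (kindnet, per the multinode detection below) is active. Inspecting or manually undoing the rename is straightforward (a sketch, using a filename from the log line above):

    ls /etc/cni/net.d/                                        # set-aside configs end in .mk_disabled
    sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled \
            /etc/cni/net.d/87-podman-bridge.conflist          # hypothetical manual rollback
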
	I1003 18:31:02.833043   64909 start.go:495] detecting cgroup driver to use...
	I1003 18:31:02.833073   64909 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:31:02.833108   64909 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:31:02.847613   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:31:02.858865   64909 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:31:02.858910   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:31:02.874470   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:31:02.890554   64909 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:31:02.970342   64909 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:31:03.055310   64909 docker.go:234] disabling docker service ...
	I1003 18:31:03.055369   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:31:03.072668   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:31:03.084308   64909 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:31:03.163959   64909 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:31:03.241930   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
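
Stopping, disabling, and masking both docker.socket and docker.service prevents socket activation from restarting dockerd behind CRI-O's back; the final is-active probe confirms it stayed down. The same checks by hand:

    systemctl is-enabled docker.socket docker.service   # expect: disabled / masked
    systemctl is-active docker                          # expect: inactive
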
	I1003 18:31:03.253863   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:31:03.266905   64909 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:31:03.266971   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.276795   64909 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:31:03.276848   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.285157   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.293117   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.301070   64909 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:31:03.308489   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.316789   64909 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.329424   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
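
After this run of sed edits, /etc/crio/crio.conf.d/02-crio.conf should pin the pause image, switch the cgroup manager to systemd, put conmon in the "pod" cgroup, and open unprivileged ports from 0. A hedged spot-check before the daemon-reload/restart below:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the sed commands above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls)
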
	I1003 18:31:03.337651   64909 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:31:03.344839   64909 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:31:03.352026   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.430894   64909 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 18:31:03.533915   64909 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:31:03.534002   64909 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:31:03.537783   64909 start.go:563] Will wait 60s for crictl version
	I1003 18:31:03.537838   64909 ssh_runner.go:195] Run: which crictl
	I1003 18:31:03.541393   64909 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:31:03.564883   64909 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
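
The crictl calls above work without a --runtime-endpoint flag because /etc/crictl.yaml, written a moment earlier, already points at the CRI-O socket. A minimal smoke test of that wiring:

    cat /etc/crictl.yaml        # runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo crictl info | head     # queries CRI-O status over the configured socket
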
	I1003 18:31:03.564963   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.591363   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.619425   64909 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:31:03.620466   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:31:03.637151   64909 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:31:03.641184   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
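
That one-liner is minikube's idempotent /etc/hosts update: strip any stale host.minikube.internal entry, append the current mapping, and copy the result back under sudo (a plain > redirect to /etc/hosts would run unprivileged and fail). The same pattern spelled out:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.49.1\thost.minikube.internal'
    } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts
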
	I1003 18:31:03.651292   64909 kubeadm.go:883] updating cluster {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:31:03.651379   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:31:03.651428   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.680883   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.680904   64909 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:31:03.680955   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.706829   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.706859   64909 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:31:03.706866   64909 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 18:31:03.706953   64909 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
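
The unit fragment above is installed as a systemd drop-in (10-kubeadm.conf, scp'd a few lines below); the empty ExecStart= line clears the packaged command so the second ExecStart= with minikube's --node-ip and --hostname-override flags fully replaces it. To inspect the merged result on the node:

    systemctl cat kubelet     # shows the base unit plus the 10-kubeadm.conf drop-in
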
	I1003 18:31:03.707032   64909 ssh_runner.go:195] Run: crio config
	I1003 18:31:03.751501   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:31:03.751523   64909 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:31:03.751538   64909 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:31:03.751558   64909 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422561 NodeName:ha-422561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:31:03.751669   64909 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422561"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
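Before the init run below, a config like this can be checked in isolation; recent kubeadm releases ship a validate subcommand (a hedged sketch against the path minikube writes):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml
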
	I1003 18:31:03.751691   64909 kube-vip.go:115] generating kube-vip config ...
	I1003 18:31:03.751728   64909 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1003 18:31:03.763009   64909 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
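
The lsmod probe exiting with status 1 means no IPVS modules are loaded in the 6.8.0-1041-gcp kernel, so kube-vip falls back from IPVS-based control-plane load balancing to plain ARP failover for the VIP. On a kernel that ships the modules they could be loaded first (a sketch; availability depends on the host kernel build):

    sudo modprobe ip_vs
    sudo modprobe ip_vs_rr    # round-robin scheduler, commonly used with kube-vip
    lsmod | grep ip_vs        # the same check minikube runs above
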
	I1003 18:31:03.763125   64909 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
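
This manifest is copied into /etc/kubernetes/manifests below, so the kubelet runs kube-vip as a static pod: it answers ARP for the VIP 192.168.49.254 and elects a leader via the plndr-cp-lock lease. Once the kubelet is up, the pod can be checked straight from the runtime (a sketch, run on the node):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps | grep kube-vip
    ip addr show eth0 | grep 192.168.49.254    # the VIP is bound on the current leader only
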
	I1003 18:31:03.763181   64909 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:31:03.770585   64909 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:31:03.770633   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 18:31:03.778069   64909 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1003 18:31:03.790397   64909 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:31:03.805112   64909 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1003 18:31:03.817362   64909 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1003 18:31:03.830824   64909 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1003 18:31:03.834300   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:31:03.843861   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.921407   64909 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:31:03.944431   64909 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561 for IP: 192.168.49.2
	I1003 18:31:03.944451   64909 certs.go:195] generating shared ca certs ...
	I1003 18:31:03.944468   64909 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:03.944607   64909 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:31:03.944644   64909 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:31:03.944652   64909 certs.go:257] generating profile certs ...
	I1003 18:31:03.944708   64909 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key
	I1003 18:31:03.944722   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt with IP's: []
	I1003 18:31:04.171087   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt ...
	I1003 18:31:04.171118   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt: {Name:mked6cb0f731cbb630d2b187c4975015a458a284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171291   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key ...
	I1003 18:31:04.171301   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key: {Name:mk0c9f0a0941d99f2af213cd316467f053532c99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171391   64909 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905
	I1003 18:31:04.171406   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1003 18:31:04.383185   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 ...
	I1003 18:31:04.383218   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905: {Name:mkc24c55d4abb428b3559a93e6e301be2cab703a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383381   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 ...
	I1003 18:31:04.383394   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905: {Name:mk0576a73623089a3eecf4e34bbbd214545e2247 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383486   64909 certs.go:382] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt
	I1003 18:31:04.383601   64909 certs.go:386] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key
	I1003 18:31:04.383674   64909 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key
	I1003 18:31:04.383689   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt with IP's: []
	I1003 18:31:04.628083   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt ...
	I1003 18:31:04.628112   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt: {Name:mkc19179c67a2559968759165df93d304eb42db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.628269   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key ...
	I1003 18:31:04.628279   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key: {Name:mka8b2392a3d721a70329b852837f3403643f948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.628347   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:31:04.628364   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:31:04.628375   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:31:04.628384   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:31:04.628397   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:31:04.628410   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:31:04.628430   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:31:04.628442   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:31:04.628492   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:31:04.628525   64909 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:31:04.628535   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:31:04.628558   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:31:04.628580   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:31:04.628601   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:31:04.628637   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:04.628666   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.628680   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.628692   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.629254   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:31:04.646879   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:31:04.663465   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:31:04.679837   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:31:04.695959   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 18:31:04.712689   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:31:04.729310   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:31:04.745587   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:31:04.761663   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:31:04.779546   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:31:04.796119   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:31:04.813748   64909 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:31:04.826629   64909 ssh_runner.go:195] Run: openssl version
	I1003 18:31:04.832848   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:31:04.840960   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844465   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844506   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.878276   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:31:04.886714   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:31:04.894672   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898099   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898154   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.931606   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:31:04.940357   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:31:04.948454   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952097   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952148   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.985741   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
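
The b5213941.0, 3ec20f2e.0, and 51391683.0 link names are OpenSSL subject hashes: anything trusting /etc/ssl/certs looks up CA certificates by <subject_hash>.0, which is why each cert is hashed first and then symlinked. The same wiring by hand, for the minikube CA:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # yields b5213941.0 here
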
	I1003 18:31:04.994005   64909 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:31:04.997322   64909 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 18:31:04.997379   64909 kubeadm.go:400] StartCluster: {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:31:04.997476   64909 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:31:04.997539   64909 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:31:05.022530   64909 cri.go:89] found id: ""
	I1003 18:31:05.022595   64909 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:31:05.030329   64909 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:31:05.037782   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:31:05.037841   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:31:05.045127   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:31:05.045142   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:31:05.045174   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:31:05.052235   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:31:05.052286   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:31:05.059062   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:31:05.066034   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:31:05.066081   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:31:05.072912   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.079906   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:31:05.079966   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.086575   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:31:05.093500   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:31:05.093559   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:31:05.100246   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:31:05.136174   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:31:05.136254   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:31:05.156320   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:31:05.156407   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:31:05.156462   64909 kubeadm.go:318] OS: Linux
	I1003 18:31:05.156539   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:31:05.156610   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:31:05.156705   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:31:05.156790   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:31:05.156865   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:31:05.156939   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:31:05.157035   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:31:05.157127   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:31:05.210250   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:31:05.210408   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:31:05.210566   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:31:05.217643   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:31:05.219725   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:31:05.219828   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:31:05.219943   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:31:05.398135   64909 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 18:31:05.511875   64909 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 18:31:05.863575   64909 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 18:31:06.044823   64909 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 18:31:06.083505   64909 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 18:31:06.083616   64909 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.181464   64909 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 18:31:06.181591   64909 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.345813   64909 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 18:31:06.565989   64909 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 18:31:06.759809   64909 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 18:31:06.759892   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:31:06.883072   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:31:07.211268   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:31:07.403076   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:31:07.687412   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:31:08.052476   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:31:08.052957   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:31:08.054984   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:31:08.056889   64909 out.go:252]   - Booting up control plane ...
	I1003 18:31:08.056984   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:31:08.057047   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:31:08.057102   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:31:08.069846   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:31:08.069954   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:31:08.077490   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:31:08.077826   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:31:08.077870   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:31:08.170750   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:31:08.170893   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:31:09.172507   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001794723s
	I1003 18:31:09.175233   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:31:09.175335   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:31:09.175418   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:31:09.175496   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:35:09.177158   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	I1003 18:35:09.177466   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	I1003 18:35:09.177673   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	I1003 18:35:09.177731   64909 kubeadm.go:318] 
	I1003 18:35:09.177887   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:35:09.178114   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:35:09.178320   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:35:09.178580   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:35:09.178818   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:35:09.179017   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:35:09.179033   64909 kubeadm.go:318] 
	I1003 18:35:09.182028   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:09.182304   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:35:09.182918   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1003 18:35:09.183015   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1003 18:35:09.183174   64909 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001794723s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
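All three control-plane components failed their health checks together, which usually points at containers that crashed right after start (or never started) rather than at the checks themselves. The kubeadm hint above, combined with kubelet logs, is the standard triage (a generic sketch, run on the node; CONTAINERID comes from the first command):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
    sudo journalctl -u kubelet --no-pager | tail -n 50
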
	I1003 18:35:09.183243   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 18:35:11.953646   64909 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.770379999s)
	I1003 18:35:11.953721   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:35:11.965876   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:35:11.965928   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:35:11.973363   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:35:11.973382   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:35:11.973419   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:35:11.980752   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:35:11.980806   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:35:11.987857   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:35:11.995081   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:35:11.995127   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:35:12.001778   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.009063   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:35:12.009126   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.015927   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:35:12.022875   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:35:12.022943   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:35:12.029549   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:35:12.082477   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:12.138594   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:39:14.312592   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1003 18:39:14.312818   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 18:39:14.315914   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:39:14.315992   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:39:14.316115   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:39:14.316166   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:39:14.316250   64909 kubeadm.go:318] OS: Linux
	I1003 18:39:14.316328   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:39:14.316401   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:39:14.316475   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:39:14.316553   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:39:14.316624   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:39:14.316701   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:39:14.316751   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:39:14.316825   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:39:14.316936   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:39:14.317123   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:39:14.317262   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:39:14.317314   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:39:14.319872   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:39:14.319940   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:39:14.320033   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:39:14.320122   64909 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 18:39:14.320186   64909 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 18:39:14.320253   64909 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 18:39:14.320299   64909 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 18:39:14.320350   64909 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 18:39:14.320420   64909 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 18:39:14.320509   64909 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 18:39:14.320604   64909 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 18:39:14.320671   64909 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 18:39:14.320751   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:39:14.320828   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:39:14.320904   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:39:14.321006   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:39:14.321096   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:39:14.321174   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:39:14.321279   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:39:14.321373   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:39:14.322793   64909 out.go:252]   - Booting up control plane ...
	I1003 18:39:14.322884   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:39:14.323004   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:39:14.323072   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:39:14.323162   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:39:14.323237   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:39:14.323335   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:39:14.323415   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:39:14.323456   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:39:14.323557   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:39:14.323652   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:39:14.323702   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001540709s
	I1003 18:39:14.323792   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:39:14.323860   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:39:14.323946   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:39:14.324043   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:39:14.324124   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	I1003 18:39:14.324186   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	I1003 18:39:14.324248   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	I1003 18:39:14.324258   64909 kubeadm.go:318] 
	I1003 18:39:14.324352   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:39:14.324439   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:39:14.324519   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:39:14.324595   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:39:14.324687   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:39:14.324773   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:39:14.324799   64909 kubeadm.go:318] 
	I1003 18:39:14.324836   64909 kubeadm.go:402] duration metric: took 8m9.327461574s to StartCluster
	I1003 18:39:14.324877   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:39:14.324935   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:39:14.352551   64909 cri.go:89] found id: ""
	I1003 18:39:14.352594   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.352608   64909 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:39:14.352617   64909 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:39:14.352684   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:39:14.376604   64909 cri.go:89] found id: ""
	I1003 18:39:14.376629   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.376638   64909 logs.go:284] No container was found matching "etcd"
	I1003 18:39:14.376643   64909 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:39:14.376750   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:39:14.401480   64909 cri.go:89] found id: ""
	I1003 18:39:14.401504   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.401512   64909 logs.go:284] No container was found matching "coredns"
	I1003 18:39:14.401517   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:39:14.401582   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:39:14.426822   64909 cri.go:89] found id: ""
	I1003 18:39:14.426858   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.426871   64909 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:39:14.426879   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:39:14.426946   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:39:14.451679   64909 cri.go:89] found id: ""
	I1003 18:39:14.451710   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.451722   64909 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:39:14.451730   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:39:14.451787   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:39:14.477253   64909 cri.go:89] found id: ""
	I1003 18:39:14.477275   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.477282   64909 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:39:14.477288   64909 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:39:14.477332   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:39:14.501586   64909 cri.go:89] found id: ""
	I1003 18:39:14.501613   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.501621   64909 logs.go:284] No container was found matching "kindnet"
	I1003 18:39:14.501632   64909 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:39:14.501643   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:39:14.561285   64909 logs.go:123] Gathering logs for container status ...
	I1003 18:39:14.561318   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:39:14.589589   64909 logs.go:123] Gathering logs for kubelet ...
	I1003 18:39:14.589614   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:39:14.656775   64909 logs.go:123] Gathering logs for dmesg ...
	I1003 18:39:14.656809   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:39:14.668000   64909 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:39:14.668023   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:39:14.725446   64909 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1003 18:39:14.725478   64909 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	W1003 18:39:14.725530   64909 out.go:285] * 
	W1003 18:39:14.725612   64909 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1003 18:39:14.725629   64909 out.go:285] * 
	W1003 18:39:14.727399   64909 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:39:14.731087   64909 out.go:203] 
	W1003 18:39:14.732560   64909 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1003 18:39:14.732585   64909 out.go:285] * 
	I1003 18:39:14.734183   64909 out.go:203] 
	
	
	==> CRI-O <==
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.920098966Z" level=info msg="createCtr: removing container 60eac4f05bb70cc097a023480fc9d2f45ed0628f63763a71867879f1fd5fa153" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.920129084Z" level=info msg="createCtr: deleting container 60eac4f05bb70cc097a023480fc9d2f45ed0628f63763a71867879f1fd5fa153 from storage" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.922274937Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-422561_kube-system_6803106e6cb30e1b9b282ce29772fddf_0" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.895966159Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=5d5ebf70-cac3-422d-8424-70692dea829d name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.896076791Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=49f23445-fb1d-4650-aca8-7186c3d76e4e name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.89680709Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=1ea2be83-eafe-478b-86f5-ff2b9b2e9177 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.896872043Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=c4d1f694-5260-4074-8a93-156d0e025c5f name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.897665559Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-422561/kube-apiserver" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.897818486Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-422561/kube-controller-manager" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.897895229Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.898053794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.903279482Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.903713949Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.905147304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.906535458Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.925936651Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.927378667Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.927499752Z" level=info msg="createCtr: deleting container ID ea64bda413ffe4bf43dae710ca0af55cb5bf7537c29d07d52d6f7dc57d31729b from idIndex" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.927528924Z" level=info msg="createCtr: removing container ea64bda413ffe4bf43dae710ca0af55cb5bf7537c29d07d52d6f7dc57d31729b" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.927557417Z" level=info msg="createCtr: deleting container ea64bda413ffe4bf43dae710ca0af55cb5bf7537c29d07d52d6f7dc57d31729b from storage" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.928799426Z" level=info msg="createCtr: deleting container ID e2f4b8a4b4eb69392834fbdf154cc4c03d0594e25846b955a947d26192dbeeb2 from idIndex" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.92883909Z" level=info msg="createCtr: removing container e2f4b8a4b4eb69392834fbdf154cc4c03d0594e25846b955a947d26192dbeeb2" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.92887498Z" level=info msg="createCtr: deleting container e2f4b8a4b4eb69392834fbdf154cc4c03d0594e25846b955a947d26192dbeeb2 from storage" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.930691085Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-422561_kube-system_6ecf19dd95945fcfeaff027fad95c1ee_0" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.931071975Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-422561_kube-system_e643a03771f1e72f527532eff2c66a9c_0" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
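The repeated "cannot open sd-bus: No such file or directory" creation failures above suggest CRI-O is trying to reach systemd's D-Bus, i.e. it is running with the systemd cgroup manager while no systemd bus is reachable inside this docker-driver node. A hedged way to check, assuming stock CRI-O config locations; the cgroupfs switch below is a diagnostic sketch only, not a confirmed fix:

	# see which cgroup manager CRI-O is configured with
	minikube ssh -p ha-422561 -- "sudo grep -rn cgroup_manager /etc/crio"
	# a cgroupfs override would be a drop-in such as /etc/crio/crio.conf.d/99-cgroupfs.conf:
	#   [crio.runtime]
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	# followed by: sudo systemctl restart crio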
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:40:58.220796    3885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:58.221861    3885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:58.223420    3885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:58.223842    3885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:58.225352    3885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:40:58 up  1:23,  0 user,  load average: 0.67, 0.19, 0.11
	Linux ha-422561 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:40:48 ha-422561 kubelet[1961]: E1003 18:40:48.535582    1961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-422561?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 03 18:40:48 ha-422561 kubelet[1961]: I1003 18:40:48.695745    1961 kubelet_node_status.go:75] "Attempting to register node" node="ha-422561"
	Oct 03 18:40:48 ha-422561 kubelet[1961]: E1003 18:40:48.696172    1961 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-422561"
	Oct 03 18:40:49 ha-422561 kubelet[1961]: E1003 18:40:49.418760    1961 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.349401    1961 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-422561.186b0ef272ca351c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-422561,UID:ha-422561,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-422561 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-422561,},FirstTimestamp:2025-10-03 18:35:13.889039644 +0000 UTC m=+0.583846472,LastTimestamp:2025-10-03 18:35:13.889039644 +0000 UTC m=+0.583846472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-422561,}"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.895596    1961 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.895738    1961 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.916294    1961 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-422561\" not found"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.930954    1961 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:40:53 ha-422561 kubelet[1961]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:53 ha-422561 kubelet[1961]:  > podSandboxID="a859763ae69d997e72724d21d35d0ae86fcde7bd11468ef604f5a6d23f35b0f0"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.931068    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:40:53 ha-422561 kubelet[1961]:         container kube-apiserver start failed in pod kube-apiserver-ha-422561_kube-system(6ecf19dd95945fcfeaff027fad95c1ee): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:53 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.931108    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-422561" podUID="6ecf19dd95945fcfeaff027fad95c1ee"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.931305    1961 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:40:53 ha-422561 kubelet[1961]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:53 ha-422561 kubelet[1961]:  > podSandboxID="2bca45b92f4f55f540f80dd9d8d3d282362f7f0ecce2ac4786e27a3b4a9cfd4d"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.931391    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:40:53 ha-422561 kubelet[1961]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-422561_kube-system(e643a03771f1e72f527532eff2c66a9c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:53 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.932375    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-422561" podUID="e643a03771f1e72f527532eff2c66a9c"
	Oct 03 18:40:55 ha-422561 kubelet[1961]: E1003 18:40:55.536511    1961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-422561?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 03 18:40:55 ha-422561 kubelet[1961]: I1003 18:40:55.697332    1961 kubelet_node_status.go:75] "Attempting to register node" node="ha-422561"
	Oct 03 18:40:55 ha-422561 kubelet[1961]: E1003 18:40:55.697723    1961 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-422561"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561: exit status 6 (289.445482ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1003 18:40:58.587903   74330 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-422561" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (1.52s)
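The status failure above pairs the stale-context warning in stdout with the kubeconfig endpoint error in stderr. The remedy minikube itself suggests, sketched with the profile flag plus a quick verification (assuming update-context restores the profile's entry in the kubeconfig):

	minikube update-context -p ha-422561
	kubectl config current-context    # expected to print ha-422561 once the context is repaired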

x
+
TestMultiControlPlane/serial/StopSecondaryNode (1.58s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 node stop m02 --alsologtostderr -v 5: exit status 85 (64.064697ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1003 18:40:58.653269   74444 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:40:58.653529   74444 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:40:58.653537   74444 out.go:374] Setting ErrFile to fd 2...
	I1003 18:40:58.653541   74444 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:40:58.653714   74444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:40:58.653959   74444 mustload.go:65] Loading cluster: ha-422561
	I1003 18:40:58.654258   74444 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:40:58.656235   74444 out.go:203] 
	W1003 18:40:58.657383   74444 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1003 18:40:58.657395   74444 out.go:285] * 
	W1003 18:40:58.660524   74444 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:40:58.661872   74444 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-422561 node stop m02 --alsologtostderr -v 5": exit status 85
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5: exit status 6 (285.54808ms)

                                                
                                                
-- stdout --
	ha-422561
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:40:58.717943   74455 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:40:58.718198   74455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:40:58.718206   74455 out.go:374] Setting ErrFile to fd 2...
	I1003 18:40:58.718210   74455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:40:58.718404   74455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:40:58.718549   74455 out.go:368] Setting JSON to false
	I1003 18:40:58.718574   74455 mustload.go:65] Loading cluster: ha-422561
	I1003 18:40:58.718620   74455 notify.go:220] Checking for updates...
	I1003 18:40:58.718886   74455 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:40:58.718899   74455 status.go:174] checking status of ha-422561 ...
	I1003 18:40:58.719331   74455 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:40:58.737912   74455 status.go:371] ha-422561 host status = "Running" (err=<nil>)
	I1003 18:40:58.737954   74455 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:40:58.738238   74455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:40:58.754544   74455 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:40:58.754778   74455 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:40:58.754815   74455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:40:58.771511   74455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:40:58.868835   74455 ssh_runner.go:195] Run: systemctl --version
	I1003 18:40:58.874799   74455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:40:58.886632   74455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:40:58.938037   74455 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:40:58.928372911 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1003 18:40:58.938436   74455 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:40:58.938458   74455 api_server.go:166] Checking apiserver status ...
	I1003 18:40:58.938486   74455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 18:40:58.948271   74455 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:40:58.948300   74455 status.go:463] ha-422561 apiserver status = Running (err=<nil>)
	I1003 18:40:58.948310   74455 status.go:176] ha-422561 status: &{Name:ha-422561 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:374: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5" : exit status 6
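Exit status 85 (GUEST_NODE_RETRIEVE) and the degraded status above share one root cause: the earlier `ha-422561 start --ha` run never completed (see TestMultiControlPlane/serial/StartCluster), so the saved profile only ever recorded the primary node and there is no "m02" to stop. A hypothetical way to confirm that from the profile on disk, with struct fields inferred from the cluster config printed in the Last Start log below (an illustration, not minikube's own config package):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    type node struct {
    	Name         string
    	ControlPlane bool
    	Worker       bool
    }

    type clusterConfig struct {
    	Name  string
    	Nodes []node
    }

    func main() {
    	// Profile path as shown in the start log below.
    	data, err := os.ReadFile("/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json")
    	if err != nil {
    		panic(err)
    	}
    	var cfg clusterConfig
    	if err := json.Unmarshal(data, &cfg); err != nil {
    		panic(err)
    	}
    	fmt.Printf("profile %q has %d node(s)\n", cfg.Name, len(cfg.Nodes))
    	for _, n := range cfg.Nodes {
    		fmt.Printf("  name=%q control-plane=%v worker=%v\n", n.Name, n.ControlPlane, n.Worker)
    	}
    }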
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-422561
helpers_test.go:243: (dbg) docker inspect ha-422561:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	        "Created": "2025-10-03T18:31:00.396132938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 65481,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T18:31:00.428325646Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hostname",
	        "HostsPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hosts",
	        "LogPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512-json.log",
	        "Name": "/ha-422561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-422561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-422561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	                "LowerDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-422561",
	                "Source": "/var/lib/docker/volumes/ha-422561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422561",
	                "name.minikube.sigs.k8s.io": "ha-422561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3084976d568ce061948ebe671f279a80502b1d28417f2be7c2497961eac2a5aa",
	            "SandboxKey": "/var/run/docker/netns/3084976d568c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:e4:3c:eb:d3:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de6aa7ca29f453c0d15cb280abde7ee215f554c89e78e3db8a0f7590468114b5",
	                    "EndpointID": "1b961733d045b77a64efb8afa6caa273125f56ec888f823b790f5454f23ca3b7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422561",
	                        "eef8fc426b2b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
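One detail worth pulling out of the inspect dump: HostConfig.PortBindings requests every port with an empty HostPort (meaning "assign an ephemeral port"), and NetworkSettings.Ports shows what dockerd actually picked, including 22/tcp on 127.0.0.1:32783, the SSH port the status probe dialed in the stderr earlier. A small standalone sketch of that lookup, reusing the Go template that appears verbatim in the cli_runner lines of this log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same template minikube's cli_runner uses to resolve the forwarded
    	// SSH port of the "ha-422561" container.
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "ha-422561").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ssh published on 127.0.0.1:" + strings.TrimSpace(string(out))) // 32783 in this run
    }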
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561: exit status 6 (287.895833ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:40:59.243940   74581 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
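The probe above tolerates exit status 6 because `status --format={{.Host}}` still prints the requested field on stdout even when the kubeconfig check fails. A minimal sketch of the probe (not the harness's actual code; the binary path and profile name are taken from this run). Note that every probe in this run also prints the `minikube update-context` hint, which cannot help until a "ha-422561" entry exists in the kubeconfig at all:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Read stdout before considering the error: exit status 6 means
    	// "kubeconfig misconfigured", but the Host field is still printed.
    	out, _ := exec.Command("out/minikube-linux-amd64", "status",
    		"--format={{.Host}}", "-p", "ha-422561", "-n", "ha-422561").Output()
    	host := strings.SplitN(strings.TrimSpace(string(out)), "\n", 2)[0]
    	fmt.Printf("host state: %q\n", host) // "Running" in this run, even with the apiserver down
    }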
helpers_test.go:252: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-889240 image ls --format json --alsologtostderr                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image ls --format table --alsologtostderr                                                     │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image build -t localhost/my-image:functional-889240 testdata/build --alsologtostderr          │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:27 UTC │
	│ image   │ functional-889240 image ls                                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:27 UTC │ 03 Oct 25 18:27 UTC │
	│ delete  │ -p functional-889240                                                                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │ 03 Oct 25 18:30 UTC │
	│ start   │ ha-422561 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- rollout status deployment/busybox                                                          │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node add --alsologtostderr -v 5                                                                       │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node stop m02 --alsologtostderr -v 5                                                                  │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:30:55
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:30:55.351405   64909 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:30:55.351662   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351671   64909 out.go:374] Setting ErrFile to fd 2...
	I1003 18:30:55.351675   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351854   64909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:30:55.352339   64909 out.go:368] Setting JSON to false
	I1003 18:30:55.353203   64909 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4406,"bootTime":1759511849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:30:55.353289   64909 start.go:140] virtualization: kvm guest
	I1003 18:30:55.355458   64909 out.go:179] * [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:30:55.356815   64909 notify.go:220] Checking for updates...
	I1003 18:30:55.356884   64909 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:30:55.358389   64909 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:30:55.359964   64909 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:30:55.361351   64909 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:30:55.362647   64909 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:30:55.363956   64909 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:30:55.365351   64909 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:30:55.387768   64909 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:30:55.387885   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.443407   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.433728571 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.443516   64909 docker.go:318] overlay module found
	I1003 18:30:55.445440   64909 out.go:179] * Using the docker driver based on user configuration
	I1003 18:30:55.446777   64909 start.go:304] selected driver: docker
	I1003 18:30:55.446793   64909 start.go:924] validating driver "docker" against <nil>
	I1003 18:30:55.446808   64909 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:30:55.447403   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.498777   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.489521827 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.498958   64909 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 18:30:55.499206   64909 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:30:55.501187   64909 out.go:179] * Using Docker driver with root privileges
	I1003 18:30:55.502312   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:30:55.502386   64909 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 18:30:55.502397   64909 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 18:30:55.502459   64909 start.go:348] cluster config:
	{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:30:55.503779   64909 out.go:179] * Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	I1003 18:30:55.504816   64909 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:30:55.506028   64909 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:30:55.507131   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:55.507167   64909 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:30:55.507169   64909 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:30:55.507175   64909 cache.go:58] Caching tarball of preloaded images
	I1003 18:30:55.507294   64909 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:30:55.507311   64909 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:30:55.507736   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:30:55.507764   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json: {Name:mk1ece959bac74a473416f0dfc8af04a6136d7b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:30:55.527458   64909 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:30:55.527478   64909 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:30:55.527494   64909 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:30:55.527527   64909 start.go:360] acquireMachinesLock for ha-422561: {Name:mk32fd04a5d9b5f89831583bab7d7527f4d187a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:30:55.527631   64909 start.go:364] duration metric: took 81.336µs to acquireMachinesLock for "ha-422561"
	I1003 18:30:55.527657   64909 start.go:93] Provisioning new machine with config: &{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:30:55.527748   64909 start.go:125] createHost starting for "" (driver="docker")
	I1003 18:30:55.529663   64909 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1003 18:30:55.529898   64909 start.go:159] libmachine.API.Create for "ha-422561" (driver="docker")
	I1003 18:30:55.529933   64909 client.go:168] LocalClient.Create starting
	I1003 18:30:55.530028   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem
	I1003 18:30:55.530072   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530097   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530187   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem
	I1003 18:30:55.530226   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530238   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530612   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 18:30:55.547068   64909 cli_runner.go:211] docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 18:30:55.547129   64909 network_create.go:284] running [docker network inspect ha-422561] to gather additional debugging logs...
	I1003 18:30:55.547146   64909 cli_runner.go:164] Run: docker network inspect ha-422561
	W1003 18:30:55.563141   64909 cli_runner.go:211] docker network inspect ha-422561 returned with exit code 1
	I1003 18:30:55.563167   64909 network_create.go:287] error running [docker network inspect ha-422561]: docker network inspect ha-422561: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-422561 not found
	I1003 18:30:55.563179   64909 network_create.go:289] output of [docker network inspect ha-422561]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-422561 not found
	
	** /stderr **
	I1003 18:30:55.563276   64909 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:30:55.579301   64909 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00157b3a0}
	I1003 18:30:55.579336   64909 network_create.go:124] attempt to create docker network ha-422561 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1003 18:30:55.579388   64909 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-422561 ha-422561
	I1003 18:30:55.634233   64909 network_create.go:108] docker network ha-422561 192.168.49.0/24 created
	I1003 18:30:55.634260   64909 kic.go:121] calculated static IP "192.168.49.2" for the "ha-422561" container
	I1003 18:30:55.634318   64909 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 18:30:55.649960   64909 cli_runner.go:164] Run: docker volume create ha-422561 --label name.minikube.sigs.k8s.io=ha-422561 --label created_by.minikube.sigs.k8s.io=true
	I1003 18:30:55.667186   64909 oci.go:103] Successfully created a docker volume ha-422561
	I1003 18:30:55.667250   64909 cli_runner.go:164] Run: docker run --rm --name ha-422561-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --entrypoint /usr/bin/test -v ha-422561:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 18:30:56.041615   64909 oci.go:107] Successfully prepared a docker volume ha-422561
	I1003 18:30:56.041648   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:56.041669   64909 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 18:30:56.041727   64909 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 18:31:00.326417   64909 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.284654466s)
	I1003 18:31:00.326457   64909 kic.go:203] duration metric: took 4.284784967s to extract preloaded images to volume ...
	W1003 18:31:00.326567   64909 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1003 18:31:00.326610   64909 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1003 18:31:00.326657   64909 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 18:31:00.381592   64909 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-422561 --name ha-422561 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-422561 --network ha-422561 --ip 192.168.49.2 --volume ha-422561:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1003 18:31:00.641348   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Running}}
	I1003 18:31:00.659876   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:00.678319   64909 cli_runner.go:164] Run: docker exec ha-422561 stat /var/lib/dpkg/alternatives/iptables
	I1003 18:31:00.728414   64909 oci.go:144] the created container "ha-422561" has a running status.
	I1003 18:31:00.728450   64909 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa...
	I1003 18:31:01.103610   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1003 18:31:01.103663   64909 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 18:31:01.128670   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.147200   64909 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 18:31:01.147218   64909 kic_runner.go:114] Args: [docker exec --privileged ha-422561 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 18:31:01.189023   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.207395   64909 machine.go:93] provisionDockerMachine start ...
	I1003 18:31:01.207497   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.226029   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.226282   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.226299   64909 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:31:01.372245   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.372275   64909 ubuntu.go:182] provisioning hostname "ha-422561"
	I1003 18:31:01.372335   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.390674   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.390889   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.390902   64909 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-422561 && echo "ha-422561" | sudo tee /etc/hostname
	I1003 18:31:01.544850   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.544932   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.563695   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.563966   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.564014   64909 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422561/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:31:01.708942   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:31:01.708971   64909 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:31:01.709036   64909 ubuntu.go:190] setting up certificates
	I1003 18:31:01.709048   64909 provision.go:84] configureAuth start
	I1003 18:31:01.709101   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:01.727778   64909 provision.go:143] copyHostCerts
	I1003 18:31:01.727814   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727849   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:31:01.727858   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727940   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:31:01.728054   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728079   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:31:01.728090   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728137   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:31:01.728200   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728225   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:31:01.728234   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728266   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:31:01.728336   64909 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.ha-422561 san=[127.0.0.1 192.168.49.2 ha-422561 localhost minikube]
	I1003 18:31:01.864219   64909 provision.go:177] copyRemoteCerts
	I1003 18:31:01.864281   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:31:01.864317   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.882069   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:01.982800   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:31:01.982877   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 18:31:02.000887   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:31:02.000952   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 18:31:02.017591   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:31:02.017639   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:31:02.034172   64909 provision.go:87] duration metric: took 325.10989ms to configureAuth
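	The server certificate provisioned above is pinned to the SANs in the san=[...] line (127.0.0.1, 192.168.49.2, ha-422561, localhost, minikube) and copied to /etc/docker alongside the CA. A minimal sketch to confirm those SANs from the host, assuming openssl is available inside the node container:

	    # Inspect the SAN extension of the cert just copied to /etc/docker/server.pem
	    docker exec ha-422561 openssl x509 -in /etc/docker/server.pem -noout -text \
	      | grep -A1 'Subject Alternative Name'
	    # Expect DNS:ha-422561, DNS:localhost, DNS:minikube, IP:127.0.0.1, IP:192.168.49.2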
	I1003 18:31:02.034202   64909 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:31:02.034393   64909 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:31:02.034508   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.052111   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:02.052326   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:02.052344   64909 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:31:02.295594   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:31:02.295629   64909 machine.go:96] duration metric: took 1.088207423s to provisionDockerMachine
	I1003 18:31:02.295640   64909 client.go:171] duration metric: took 6.765697238s to LocalClient.Create
	I1003 18:31:02.295660   64909 start.go:167] duration metric: took 6.765761646s to libmachine.API.Create "ha-422561"
	I1003 18:31:02.295669   64909 start.go:293] postStartSetup for "ha-422561" (driver="docker")
	I1003 18:31:02.295682   64909 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:31:02.295752   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:31:02.295789   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.312783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.414720   64909 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:31:02.418127   64909 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:31:02.418149   64909 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:31:02.418159   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:31:02.418213   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:31:02.418310   64909 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:31:02.418326   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:31:02.418453   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:31:02.425623   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:02.444405   64909 start.go:296] duration metric: took 148.722871ms for postStartSetup
	I1003 18:31:02.444748   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.462226   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:31:02.462456   64909 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:31:02.462495   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.478737   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.575846   64909 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:31:02.580138   64909 start.go:128] duration metric: took 7.052376255s to createHost
	I1003 18:31:02.580160   64909 start.go:83] releasing machines lock for "ha-422561", held for 7.052515614s
	I1003 18:31:02.580230   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.596730   64909 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:31:02.596776   64909 ssh_runner.go:195] Run: cat /version.json
	I1003 18:31:02.596798   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.596817   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.613783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.614183   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.764865   64909 ssh_runner.go:195] Run: systemctl --version
	I1003 18:31:02.771251   64909 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:31:02.803643   64909 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:31:02.807949   64909 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:31:02.808044   64909 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:31:02.833024   64909 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 18:31:02.833043   64909 start.go:495] detecting cgroup driver to use...
	I1003 18:31:02.833073   64909 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:31:02.833108   64909 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:31:02.847613   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:31:02.858865   64909 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:31:02.858910   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:31:02.874470   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:31:02.890554   64909 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:31:02.970342   64909 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:31:03.055310   64909 docker.go:234] disabling docker service ...
	I1003 18:31:03.055369   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:31:03.072668   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:31:03.084308   64909 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:31:03.163959   64909 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:31:03.241930   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:31:03.253863   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:31:03.266905   64909 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:31:03.266971   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.276795   64909 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:31:03.276848   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.285157   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.293117   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.301070   64909 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:31:03.308489   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.316789   64909 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.329424   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.337651   64909 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:31:03.344839   64909 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
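	The sed pipeline above pins the pause image, switches CRI-O to the systemd cgroup manager, moves conmon into the pod cgroup, and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A sketch to spot-check the result before the daemon-reload/restart that follows (paths and keys taken from the commands above):

	    # Verify the keys the sed edits should have left in the drop-in config
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # Confirm IPv4 forwarding, enabled by the echo just above
	    cat /proc/sys/net/ipv4/ip_forward   # expect: 1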
	I1003 18:31:03.352026   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.430894   64909 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 18:31:03.533915   64909 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:31:03.534002   64909 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:31:03.537783   64909 start.go:563] Will wait 60s for crictl version
	I1003 18:31:03.537838   64909 ssh_runner.go:195] Run: which crictl
	I1003 18:31:03.541393   64909 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:31:03.564883   64909 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:31:03.564963   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.591363   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.619425   64909 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:31:03.620466   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:31:03.637151   64909 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:31:03.641184   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
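	Note the filter-then-copy idiom in the command above: the hosts file is rewritten to a temp file and then cp'd back rather than moved. A plausible reason (an assumption, not stated in the log) is that /etc/hosts is bind-mounted inside a Docker container, so it can be overwritten in place but not replaced by rename. A generalized sketch of the same idempotent update, with IP and NAME as placeholders:

	    # Remove any stale entry for NAME, append the current mapping, copy back in place
	    IP=192.168.49.1; NAME=host.minikube.internal
	    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$   # cp, not mv: the target may be a bind mount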
	I1003 18:31:03.651292   64909 kubeadm.go:883] updating cluster {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:31:03.651379   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:31:03.651428   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.680883   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.680904   64909 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:31:03.680955   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.706829   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.706859   64909 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:31:03.706866   64909 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 18:31:03.706953   64909 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 18:31:03.707032   64909 ssh_runner.go:195] Run: crio config
	I1003 18:31:03.751501   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:31:03.751523   64909 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:31:03.751538   64909 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:31:03.751558   64909 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422561 NodeName:ha-422561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:31:03.751669   64909 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422561"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
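	The rendered manifest stacks four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) into the single file kubeadm consumes. As a sketch, recent kubeadm releases can sanity-check such a file before init; whether this exact subcommand is available depends on the kubeadm build, so treat it as an assumption:

	    # Validate the stacked config the log writes to /var/tmp/minikube/kubeadm.yaml below
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml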
	I1003 18:31:03.751691   64909 kube-vip.go:115] generating kube-vip config ...
	I1003 18:31:03.751728   64909 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1003 18:31:03.763009   64909 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:31:03.763125   64909 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
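	Per the env block above, this static pod should elect a leader via the plndr-cp-lock lease and bind the VIP 192.168.49.254 on eth0. A hedged sketch for checking that from inside the node once kubelet has started the manifest (the metrics path is an assumption based on the prometheus_server setting):

	    # Did kube-vip claim the control-plane VIP?
	    ip addr show eth0 | grep 192.168.49.254
	    # Is the static pod's container running?
	    sudo crictl ps -a | grep kube-vip
	    # Prometheus endpoint configured as :2112 in the manifest above
	    curl -s http://127.0.0.1:2112/metrics | head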
	I1003 18:31:03.763181   64909 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:31:03.770585   64909 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:31:03.770633   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 18:31:03.778069   64909 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1003 18:31:03.790397   64909 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:31:03.805112   64909 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1003 18:31:03.817362   64909 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1003 18:31:03.830824   64909 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1003 18:31:03.834300   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:31:03.843861   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.921407   64909 ssh_runner.go:195] Run: sudo systemctl start kubelet
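	After starting kubelet, the quickest health signal is the same local endpoint kubeadm's [kubelet-check] polls later in this log. A minimal sketch:

	    # kubelet's healthz, as probed by kubeadm at http://127.0.0.1:10248/healthz
	    curl -s http://127.0.0.1:10248/healthz; echo
	    systemctl is-active kubelet
	    journalctl -u kubelet --no-pager -n 20   # recent lines if the probe fails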
	I1003 18:31:03.944431   64909 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561 for IP: 192.168.49.2
	I1003 18:31:03.944451   64909 certs.go:195] generating shared ca certs ...
	I1003 18:31:03.944468   64909 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:03.944607   64909 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:31:03.944644   64909 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:31:03.944652   64909 certs.go:257] generating profile certs ...
	I1003 18:31:03.944708   64909 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key
	I1003 18:31:03.944722   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt with IP's: []
	I1003 18:31:04.171087   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt ...
	I1003 18:31:04.171118   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt: {Name:mked6cb0f731cbb630d2b187c4975015a458a284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171291   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key ...
	I1003 18:31:04.171301   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key: {Name:mk0c9f0a0941d99f2af213cd316467f053532c99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171391   64909 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905
	I1003 18:31:04.171406   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1003 18:31:04.383185   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 ...
	I1003 18:31:04.383218   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905: {Name:mkc24c55d4abb428b3559a93e6e301be2cab703a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383381   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 ...
	I1003 18:31:04.383394   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905: {Name:mk0576a73623089a3eecf4e34bbbd214545e2247 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383486   64909 certs.go:382] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt
	I1003 18:31:04.383601   64909 certs.go:386] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key
	I1003 18:31:04.383674   64909 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key
	I1003 18:31:04.383689   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt with IP's: []
	I1003 18:31:04.628083   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt ...
	I1003 18:31:04.628112   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt: {Name:mkc19179c67a2559968759165df93d304eb42db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.628269   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key ...
	I1003 18:31:04.628279   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key: {Name:mka8b2392a3d721a70329b852837f3403643f948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.628347   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:31:04.628364   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:31:04.628375   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:31:04.628384   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:31:04.628397   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:31:04.628410   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:31:04.628430   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:31:04.628442   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:31:04.628492   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:31:04.628525   64909 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:31:04.628535   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:31:04.628558   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:31:04.628580   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:31:04.628601   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:31:04.628637   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:04.628666   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.628680   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.628692   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.629254   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:31:04.646879   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:31:04.663465   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:31:04.679837   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:31:04.695959   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 18:31:04.712689   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:31:04.729310   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:31:04.745587   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:31:04.761663   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:31:04.779546   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:31:04.796119   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:31:04.813748   64909 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:31:04.826629   64909 ssh_runner.go:195] Run: openssl version
	I1003 18:31:04.832848   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:31:04.840960   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844465   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844506   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.878276   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:31:04.886714   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:31:04.894672   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898099   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898154   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.931606   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:31:04.940357   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:31:04.948454   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952097   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952148   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.985741   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
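	The openssl x509 -hash / ln -fs pairs above follow OpenSSL's c_rehash layout: the trust directory is searched by subject-name hash, so each PEM gets a <hash>.0 symlink. A sketch showing how a name like b5213941.0 above is derived:

	    # Compute the subject hash OpenSSL uses to locate the CA during verification
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    echo "$h"                     # b5213941 for this CA, per the ln -fs above
	    ls -l "/etc/ssl/certs/$h.0"   # the symlink the commands above created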
	I1003 18:31:04.994005   64909 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:31:04.997322   64909 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 18:31:04.997379   64909 kubeadm.go:400] StartCluster: {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:31:04.997476   64909 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:31:04.997539   64909 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:31:05.022530   64909 cri.go:89] found id: ""
	I1003 18:31:05.022595   64909 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:31:05.030329   64909 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:31:05.037782   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:31:05.037841   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:31:05.045127   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:31:05.045142   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:31:05.045174   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:31:05.052235   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:31:05.052286   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:31:05.059062   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:31:05.066034   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:31:05.066081   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:31:05.072912   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.079906   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:31:05.079966   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.086575   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:31:05.093500   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:31:05.093559   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:31:05.100246   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:31:05.136174   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:31:05.136254   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:31:05.156320   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:31:05.156407   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:31:05.156462   64909 kubeadm.go:318] OS: Linux
	I1003 18:31:05.156539   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:31:05.156610   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:31:05.156705   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:31:05.156790   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:31:05.156865   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:31:05.156939   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:31:05.157035   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:31:05.157127   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:31:05.210250   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:31:05.210408   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:31:05.210566   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:31:05.217643   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:31:05.219725   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:31:05.219828   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:31:05.219943   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:31:05.398135   64909 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 18:31:05.511875   64909 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 18:31:05.863575   64909 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 18:31:06.044823   64909 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 18:31:06.083505   64909 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 18:31:06.083616   64909 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.181464   64909 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 18:31:06.181591   64909 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.345813   64909 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 18:31:06.565989   64909 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 18:31:06.759809   64909 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 18:31:06.759892   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:31:06.883072   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:31:07.211268   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:31:07.403076   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:31:07.687412   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:31:08.052476   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:31:08.052957   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:31:08.054984   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:31:08.056889   64909 out.go:252]   - Booting up control plane ...
	I1003 18:31:08.056984   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:31:08.057047   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:31:08.057102   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:31:08.069846   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:31:08.069954   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:31:08.077490   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:31:08.077826   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:31:08.077870   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:31:08.170750   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:31:08.170893   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:31:09.172507   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001794723s
	I1003 18:31:09.175233   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:31:09.175335   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:31:09.175418   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:31:09.175496   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:35:09.177158   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	I1003 18:35:09.177466   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	I1003 18:35:09.177673   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	I1003 18:35:09.177731   64909 kubeadm.go:318] 
	I1003 18:35:09.177887   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:35:09.178114   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:35:09.178320   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:35:09.178580   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:35:09.178818   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:35:09.179017   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:35:09.179033   64909 kubeadm.go:318] 
	I1003 18:35:09.182028   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:09.182304   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:35:09.182918   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1003 18:35:09.183015   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1003 18:35:09.183174   64909 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001794723s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
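	A minimal shell sketch of the crictl triage kubeadm suggests above, assuming the CRI-O socket path from this log and that the commands run inside the node (e.g. via `minikube ssh`):
	
		# Dump the last lines of every exited Kubernetes container's log.
		ENDPOINT=unix:///var/run/crio/crio.sock
		for id in $(sudo crictl --runtime-endpoint "$ENDPOINT" ps -a --state exited -q); do
		  echo "--- $id ---"
		  sudo crictl --runtime-endpoint "$ENDPOINT" logs --tail 50 "$id"
		done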
	
	I1003 18:35:09.183243   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 18:35:11.953646   64909 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.770379999s)
	I1003 18:35:11.953721   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:35:11.965876   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:35:11.965928   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:35:11.973363   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:35:11.973382   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:35:11.973419   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:35:11.980752   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:35:11.980806   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:35:11.987857   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:35:11.995081   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:35:11.995127   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:35:12.001778   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.009063   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:35:12.009126   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.015927   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:35:12.022875   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:35:12.022943   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
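	The grep/rm sequence above is minikube's stale-kubeconfig sweep: each file is kept only if it already points at the expected control-plane endpoint. Condensed into an illustrative shell loop (a sketch of the observed behavior, not minikube's actual Go code):
	
		for f in admin kubelet controller-manager scheduler; do
		  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" \
		    || sudo rm -f "/etc/kubernetes/$f.conf"
		done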
	I1003 18:35:12.029549   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:35:12.082477   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:12.138594   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:39:14.312592   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1003 18:39:14.312818   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 18:39:14.315914   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:39:14.315992   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:39:14.316115   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:39:14.316166   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:39:14.316250   64909 kubeadm.go:318] OS: Linux
	I1003 18:39:14.316328   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:39:14.316401   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:39:14.316475   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:39:14.316553   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:39:14.316624   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:39:14.316701   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:39:14.316751   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:39:14.316825   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:39:14.316936   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:39:14.317123   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:39:14.317262   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:39:14.317314   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:39:14.319872   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:39:14.319940   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:39:14.320033   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:39:14.320122   64909 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 18:39:14.320186   64909 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 18:39:14.320253   64909 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 18:39:14.320299   64909 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 18:39:14.320350   64909 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 18:39:14.320420   64909 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 18:39:14.320509   64909 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 18:39:14.320604   64909 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 18:39:14.320671   64909 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 18:39:14.320751   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:39:14.320828   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:39:14.320904   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:39:14.321006   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:39:14.321096   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:39:14.321174   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:39:14.321279   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:39:14.321373   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:39:14.322793   64909 out.go:252]   - Booting up control plane ...
	I1003 18:39:14.322884   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:39:14.323004   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:39:14.323072   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:39:14.323162   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:39:14.323237   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:39:14.323335   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:39:14.323415   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:39:14.323456   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:39:14.323557   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:39:14.323652   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:39:14.323702   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001540709s
	I1003 18:39:14.323792   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:39:14.323860   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:39:14.323946   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:39:14.324043   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:39:14.324124   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	I1003 18:39:14.324186   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	I1003 18:39:14.324248   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	I1003 18:39:14.324258   64909 kubeadm.go:318] 
	I1003 18:39:14.324352   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:39:14.324439   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:39:14.324519   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:39:14.324595   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:39:14.324687   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:39:14.324773   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:39:14.324799   64909 kubeadm.go:318] 
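	The endpoints the control-plane-check polls can be probed by hand from inside the node; a sketch (`-k` skips verification of the self-signed serving certificates, and responses may still require auth depending on RBAC):
	
		curl -k https://192.168.49.2:8443/livez    # kube-apiserver
		curl -k https://127.0.0.1:10257/healthz    # kube-controller-manager
		curl -k https://127.0.0.1:10259/livez      # kube-scheduler
		curl http://127.0.0.1:10248/healthz        # kubelet (plain HTTP)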
	I1003 18:39:14.324836   64909 kubeadm.go:402] duration metric: took 8m9.327461574s to StartCluster
	I1003 18:39:14.324877   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:39:14.324935   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:39:14.352551   64909 cri.go:89] found id: ""
	I1003 18:39:14.352594   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.352608   64909 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:39:14.352617   64909 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:39:14.352684   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:39:14.376604   64909 cri.go:89] found id: ""
	I1003 18:39:14.376629   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.376638   64909 logs.go:284] No container was found matching "etcd"
	I1003 18:39:14.376643   64909 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:39:14.376750   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:39:14.401480   64909 cri.go:89] found id: ""
	I1003 18:39:14.401504   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.401512   64909 logs.go:284] No container was found matching "coredns"
	I1003 18:39:14.401517   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:39:14.401582   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:39:14.426822   64909 cri.go:89] found id: ""
	I1003 18:39:14.426858   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.426871   64909 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:39:14.426879   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:39:14.426946   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:39:14.451679   64909 cri.go:89] found id: ""
	I1003 18:39:14.451710   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.451722   64909 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:39:14.451730   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:39:14.451787   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:39:14.477253   64909 cri.go:89] found id: ""
	I1003 18:39:14.477275   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.477282   64909 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:39:14.477288   64909 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:39:14.477332   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:39:14.501586   64909 cri.go:89] found id: ""
	I1003 18:39:14.501613   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.501621   64909 logs.go:284] No container was found matching "kindnet"
	I1003 18:39:14.501632   64909 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:39:14.501643   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:39:14.561285   64909 logs.go:123] Gathering logs for container status ...
	I1003 18:39:14.561318   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:39:14.589589   64909 logs.go:123] Gathering logs for kubelet ...
	I1003 18:39:14.589614   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:39:14.656775   64909 logs.go:123] Gathering logs for dmesg ...
	I1003 18:39:14.656809   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:39:14.668000   64909 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:39:14.668023   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:39:14.725446   64909 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
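	The same diagnostics minikube gathers above can be reproduced by hand against this profile; a sketch assuming `minikube ssh -p ha-422561` reaches the node:
	
		minikube ssh -p ha-422561 -- "sudo journalctl -u crio -n 400"
		minikube ssh -p ha-422561 -- "sudo journalctl -u kubelet -n 400"
		minikube ssh -p ha-422561 -- "sudo crictl ps -a"
		minikube ssh -p ha-422561 -- "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"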
	W1003 18:39:14.725478   64909 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	W1003 18:39:14.725530   64909 out.go:285] * 
	W1003 18:39:14.725629   64909 out.go:285] * 
	W1003 18:39:14.727399   64909 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:39:14.731087   64909 out.go:203] 
	W1003 18:39:14.732560   64909 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1003 18:39:14.732585   64909 out.go:285] * 
	I1003 18:39:14.734183   64909 out.go:203] 
	
	
	==> CRI-O <==
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.920098966Z" level=info msg="createCtr: removing container 60eac4f05bb70cc097a023480fc9d2f45ed0628f63763a71867879f1fd5fa153" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.920129084Z" level=info msg="createCtr: deleting container 60eac4f05bb70cc097a023480fc9d2f45ed0628f63763a71867879f1fd5fa153 from storage" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:46 ha-422561 crio[781]: time="2025-10-03T18:40:46.922274937Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-422561_kube-system_6803106e6cb30e1b9b282ce29772fddf_0" id=8ee50b88-f594-4d65-81a3-5ff4b08ba0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.895966159Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=5d5ebf70-cac3-422d-8424-70692dea829d name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.896076791Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=49f23445-fb1d-4650-aca8-7186c3d76e4e name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.89680709Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=1ea2be83-eafe-478b-86f5-ff2b9b2e9177 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.896872043Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=c4d1f694-5260-4074-8a93-156d0e025c5f name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.897665559Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-422561/kube-apiserver" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.897818486Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-422561/kube-controller-manager" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.897895229Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.898053794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.903279482Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.903713949Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.905147304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.906535458Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.925936651Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.927378667Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.927499752Z" level=info msg="createCtr: deleting container ID ea64bda413ffe4bf43dae710ca0af55cb5bf7537c29d07d52d6f7dc57d31729b from idIndex" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.927528924Z" level=info msg="createCtr: removing container ea64bda413ffe4bf43dae710ca0af55cb5bf7537c29d07d52d6f7dc57d31729b" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.927557417Z" level=info msg="createCtr: deleting container ea64bda413ffe4bf43dae710ca0af55cb5bf7537c29d07d52d6f7dc57d31729b from storage" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.928799426Z" level=info msg="createCtr: deleting container ID e2f4b8a4b4eb69392834fbdf154cc4c03d0594e25846b955a947d26192dbeeb2 from idIndex" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.92883909Z" level=info msg="createCtr: removing container e2f4b8a4b4eb69392834fbdf154cc4c03d0594e25846b955a947d26192dbeeb2" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.92887498Z" level=info msg="createCtr: deleting container e2f4b8a4b4eb69392834fbdf154cc4c03d0594e25846b955a947d26192dbeeb2 from storage" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.930691085Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-422561_kube-system_6ecf19dd95945fcfeaff027fad95c1ee_0" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.931071975Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-422561_kube-system_e643a03771f1e72f527532eff2c66a9c_0" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:40:59.803130    4055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:59.803627    4055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:59.805190    4055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:59.805609    4055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:40:59.807099    4055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:40:59 up  1:23,  0 user,  load average: 0.67, 0.19, 0.11
	Linux ha-422561 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:40:48 ha-422561 kubelet[1961]: I1003 18:40:48.695745    1961 kubelet_node_status.go:75] "Attempting to register node" node="ha-422561"
	Oct 03 18:40:48 ha-422561 kubelet[1961]: E1003 18:40:48.696172    1961 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-422561"
	Oct 03 18:40:49 ha-422561 kubelet[1961]: E1003 18:40:49.418760    1961 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.349401    1961 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-422561.186b0ef272ca351c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-422561,UID:ha-422561,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-422561 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-422561,},FirstTimestamp:2025-10-03 18:35:13.889039644 +0000 UTC m=+0.583846472,LastTimestamp:2025-10-03 18:35:13.889039644 +0000 UTC m=+0.583846472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-422561,}"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.895596    1961 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.895738    1961 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.916294    1961 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-422561\" not found"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.930954    1961 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:40:53 ha-422561 kubelet[1961]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:53 ha-422561 kubelet[1961]:  > podSandboxID="a859763ae69d997e72724d21d35d0ae86fcde7bd11468ef604f5a6d23f35b0f0"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.931068    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:40:53 ha-422561 kubelet[1961]:         container kube-apiserver start failed in pod kube-apiserver-ha-422561_kube-system(6ecf19dd95945fcfeaff027fad95c1ee): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:53 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.931108    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-422561" podUID="6ecf19dd95945fcfeaff027fad95c1ee"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.931305    1961 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:40:53 ha-422561 kubelet[1961]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:53 ha-422561 kubelet[1961]:  > podSandboxID="2bca45b92f4f55f540f80dd9d8d3d282362f7f0ecce2ac4786e27a3b4a9cfd4d"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.931391    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:40:53 ha-422561 kubelet[1961]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-422561_kube-system(e643a03771f1e72f527532eff2c66a9c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:53 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.932375    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-422561" podUID="e643a03771f1e72f527532eff2c66a9c"
	Oct 03 18:40:55 ha-422561 kubelet[1961]: E1003 18:40:55.536511    1961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-422561?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 03 18:40:55 ha-422561 kubelet[1961]: I1003 18:40:55.697332    1961 kubelet_node_status.go:75] "Attempting to register node" node="ha-422561"
	Oct 03 18:40:55 ha-422561 kubelet[1961]: E1003 18:40:55.697723    1961 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-422561"
	Oct 03 18:40:58 ha-422561 kubelet[1961]: E1003 18:40:58.954263    1961 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561: exit status 6 (296.356755ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:41:00.173323   74904 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

                                                
                                                
** /stderr **
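The stale-context warning in the status output above has a direct fix; assuming the profile from this run:

	minikube update-context -p ha-422561
	kubectl config current-context   # should now report ha-422561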
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-422561" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (1.58s)
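The recurring "stale minikube-vm" kubectl warning means the profile has no entry in the kubeconfig. A hedged recovery sketch with standard minikube/kubectl commands (not part of the test flow):

    out/minikube-linux-amd64 -p ha-422561 update-context   # rewrite the kubeconfig entry for this profile
    kubectl config get-contexts                            # confirm an ha-422561 context now exists
    kubectl config use-context ha-422561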

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.58s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-422561" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-422561\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-422561\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-422561\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
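For reference, the check ha_test.go:415 performs can be approximated with a jq one-liner (jq is used here for illustration only, it is not part of the harness):

    out/minikube-linux-amd64 profile list --output json \
      | jq -r '.valid[] | select(.Name == "ha-422561") | .Status'

The test expects "Degraded" once a control-plane node is stopped; this run still reports "Starting" because, per the Audit table below, the initial ha-422561 start never recorded an end time.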
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-422561
helpers_test.go:243: (dbg) docker inspect ha-422561:

-- stdout --
	[
	    {
	        "Id": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	        "Created": "2025-10-03T18:31:00.396132938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 65481,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T18:31:00.428325646Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hostname",
	        "HostsPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hosts",
	        "LogPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512-json.log",
	        "Name": "/ha-422561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-422561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-422561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	                "LowerDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-422561",
	                "Source": "/var/lib/docker/volumes/ha-422561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422561",
	                "name.minikube.sigs.k8s.io": "ha-422561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3084976d568ce061948ebe671f279a80502b1d28417f2be7c2497961eac2a5aa",
	            "SandboxKey": "/var/run/docker/netns/3084976d568c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:e4:3c:eb:d3:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de6aa7ca29f453c0d15cb280abde7ee215f554c89e78e3db8a0f7590468114b5",
	                    "EndpointID": "1b961733d045b77a64efb8afa6caa273125f56ec888f823b790f5454f23ca3b7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422561",
	                        "eef8fc426b2b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
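The post-mortem needs only a couple of fields from the inspect dump above; an equivalent targeted query with docker's Go-template output (standard docker CLI, shown for illustration) would be:

    docker inspect ha-422561 --format '{{.State.Status}} {{(index .NetworkSettings.Networks "ha-422561").IPAddress}}'
    # for this run: running 192.168.49.2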
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561: exit status 6 (288.513247ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1003 18:41:00.791681   75159 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

** /stderr **
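Both stderr blocks report the same status.go:458 root cause: no "ha-422561" entry in the run's kubeconfig. A quick verification sketch against the exact file the harness uses (standard kubectl flag):

    kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/21625-8669/kubeconfig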
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-889240 image ls --format json --alsologtostderr                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image ls --format table --alsologtostderr                                                     │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image build -t localhost/my-image:functional-889240 testdata/build --alsologtostderr          │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:27 UTC │
	│ image   │ functional-889240 image ls                                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:27 UTC │ 03 Oct 25 18:27 UTC │
	│ delete  │ -p functional-889240                                                                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │ 03 Oct 25 18:30 UTC │
	│ start   │ ha-422561 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- rollout status deployment/busybox                                                          │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node add --alsologtostderr -v 5                                                                       │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node stop m02 --alsologtostderr -v 5                                                                  │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:30:55
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:30:55.351405   64909 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:30:55.351662   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351671   64909 out.go:374] Setting ErrFile to fd 2...
	I1003 18:30:55.351675   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351854   64909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:30:55.352339   64909 out.go:368] Setting JSON to false
	I1003 18:30:55.353203   64909 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4406,"bootTime":1759511849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:30:55.353289   64909 start.go:140] virtualization: kvm guest
	I1003 18:30:55.355458   64909 out.go:179] * [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:30:55.356815   64909 notify.go:220] Checking for updates...
	I1003 18:30:55.356884   64909 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:30:55.358389   64909 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:30:55.359964   64909 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:30:55.361351   64909 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:30:55.362647   64909 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:30:55.363956   64909 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:30:55.365351   64909 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:30:55.387768   64909 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:30:55.387885   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.443407   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.433728571 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.443516   64909 docker.go:318] overlay module found
	I1003 18:30:55.445440   64909 out.go:179] * Using the docker driver based on user configuration
	I1003 18:30:55.446777   64909 start.go:304] selected driver: docker
	I1003 18:30:55.446793   64909 start.go:924] validating driver "docker" against <nil>
	I1003 18:30:55.446808   64909 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:30:55.447403   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.498777   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.489521827 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.498958   64909 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 18:30:55.499206   64909 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:30:55.501187   64909 out.go:179] * Using Docker driver with root privileges
	I1003 18:30:55.502312   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:30:55.502386   64909 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 18:30:55.502397   64909 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 18:30:55.502459   64909 start.go:348] cluster config:
	{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:30:55.503779   64909 out.go:179] * Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	I1003 18:30:55.504816   64909 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:30:55.506028   64909 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:30:55.507131   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:55.507167   64909 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:30:55.507169   64909 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:30:55.507175   64909 cache.go:58] Caching tarball of preloaded images
	I1003 18:30:55.507294   64909 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:30:55.507311   64909 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:30:55.507736   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:30:55.507764   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json: {Name:mk1ece959bac74a473416f0dfc8af04a6136d7b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:30:55.527458   64909 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:30:55.527478   64909 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:30:55.527494   64909 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:30:55.527527   64909 start.go:360] acquireMachinesLock for ha-422561: {Name:mk32fd04a5d9b5f89831583bab7d7527f4d187a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:30:55.527631   64909 start.go:364] duration metric: took 81.336µs to acquireMachinesLock for "ha-422561"
	I1003 18:30:55.527657   64909 start.go:93] Provisioning new machine with config: &{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:30:55.527748   64909 start.go:125] createHost starting for "" (driver="docker")
	I1003 18:30:55.529663   64909 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1003 18:30:55.529898   64909 start.go:159] libmachine.API.Create for "ha-422561" (driver="docker")
	I1003 18:30:55.529933   64909 client.go:168] LocalClient.Create starting
	I1003 18:30:55.530028   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem
	I1003 18:30:55.530072   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530097   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530187   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem
	I1003 18:30:55.530226   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530238   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530612   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 18:30:55.547068   64909 cli_runner.go:211] docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 18:30:55.547129   64909 network_create.go:284] running [docker network inspect ha-422561] to gather additional debugging logs...
	I1003 18:30:55.547146   64909 cli_runner.go:164] Run: docker network inspect ha-422561
	W1003 18:30:55.563141   64909 cli_runner.go:211] docker network inspect ha-422561 returned with exit code 1
	I1003 18:30:55.563167   64909 network_create.go:287] error running [docker network inspect ha-422561]: docker network inspect ha-422561: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-422561 not found
	I1003 18:30:55.563179   64909 network_create.go:289] output of [docker network inspect ha-422561]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-422561 not found
	
	** /stderr **
	I1003 18:30:55.563276   64909 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:30:55.579301   64909 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00157b3a0}
	I1003 18:30:55.579336   64909 network_create.go:124] attempt to create docker network ha-422561 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1003 18:30:55.579388   64909 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-422561 ha-422561
	I1003 18:30:55.634233   64909 network_create.go:108] docker network ha-422561 192.168.49.0/24 created
	I1003 18:30:55.634260   64909 kic.go:121] calculated static IP "192.168.49.2" for the "ha-422561" container
	I1003 18:30:55.634318   64909 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 18:30:55.649960   64909 cli_runner.go:164] Run: docker volume create ha-422561 --label name.minikube.sigs.k8s.io=ha-422561 --label created_by.minikube.sigs.k8s.io=true
	I1003 18:30:55.667186   64909 oci.go:103] Successfully created a docker volume ha-422561
	I1003 18:30:55.667250   64909 cli_runner.go:164] Run: docker run --rm --name ha-422561-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --entrypoint /usr/bin/test -v ha-422561:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 18:30:56.041615   64909 oci.go:107] Successfully prepared a docker volume ha-422561
	I1003 18:30:56.041648   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:56.041669   64909 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 18:30:56.041727   64909 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 18:31:00.326417   64909 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.284654466s)
	I1003 18:31:00.326457   64909 kic.go:203] duration metric: took 4.284784967s to extract preloaded images to volume ...
	W1003 18:31:00.326567   64909 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1003 18:31:00.326610   64909 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1003 18:31:00.326657   64909 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 18:31:00.381592   64909 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-422561 --name ha-422561 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-422561 --network ha-422561 --ip 192.168.49.2 --volume ha-422561:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1003 18:31:00.641348   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Running}}
	I1003 18:31:00.659876   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:00.678319   64909 cli_runner.go:164] Run: docker exec ha-422561 stat /var/lib/dpkg/alternatives/iptables
	I1003 18:31:00.728414   64909 oci.go:144] the created container "ha-422561" has a running status.
	I1003 18:31:00.728450   64909 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa...
	I1003 18:31:01.103610   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1003 18:31:01.103663   64909 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 18:31:01.128670   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.147200   64909 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 18:31:01.147218   64909 kic_runner.go:114] Args: [docker exec --privileged ha-422561 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 18:31:01.189023   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.207395   64909 machine.go:93] provisionDockerMachine start ...
	I1003 18:31:01.207497   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.226029   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.226282   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.226299   64909 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:31:01.372245   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.372275   64909 ubuntu.go:182] provisioning hostname "ha-422561"
	I1003 18:31:01.372335   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.390674   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.390889   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.390902   64909 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-422561 && echo "ha-422561" | sudo tee /etc/hostname
	I1003 18:31:01.544850   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.544932   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.563695   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.563966   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.564014   64909 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422561/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:31:01.708942   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:31:01.708971   64909 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:31:01.709036   64909 ubuntu.go:190] setting up certificates
	I1003 18:31:01.709048   64909 provision.go:84] configureAuth start
	I1003 18:31:01.709101   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:01.727778   64909 provision.go:143] copyHostCerts
	I1003 18:31:01.727814   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727849   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:31:01.727858   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727940   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:31:01.728054   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728079   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:31:01.728090   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728137   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:31:01.728200   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728225   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:31:01.728234   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728266   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:31:01.728336   64909 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.ha-422561 san=[127.0.0.1 192.168.49.2 ha-422561 localhost minikube]
	I1003 18:31:01.864219   64909 provision.go:177] copyRemoteCerts
	I1003 18:31:01.864281   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:31:01.864317   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.882069   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:01.982800   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:31:01.982877   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 18:31:02.000887   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:31:02.000952   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 18:31:02.017591   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:31:02.017639   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:31:02.034172   64909 provision.go:87] duration metric: took 325.10989ms to configureAuth
	I1003 18:31:02.034202   64909 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:31:02.034393   64909 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:31:02.034508   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.052111   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:02.052326   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:02.052344   64909 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:31:02.295594   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:31:02.295629   64909 machine.go:96] duration metric: took 1.088207423s to provisionDockerMachine
	I1003 18:31:02.295640   64909 client.go:171] duration metric: took 6.765697238s to LocalClient.Create
	I1003 18:31:02.295660   64909 start.go:167] duration metric: took 6.765761646s to libmachine.API.Create "ha-422561"
	I1003 18:31:02.295669   64909 start.go:293] postStartSetup for "ha-422561" (driver="docker")
	I1003 18:31:02.295682   64909 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:31:02.295752   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:31:02.295789   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.312783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.414720   64909 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:31:02.418127   64909 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:31:02.418149   64909 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:31:02.418159   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:31:02.418213   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:31:02.418310   64909 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:31:02.418326   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:31:02.418453   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:31:02.425623   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:02.444405   64909 start.go:296] duration metric: took 148.722871ms for postStartSetup
	I1003 18:31:02.444748   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.462226   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:31:02.462456   64909 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:31:02.462495   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.478737   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.575846   64909 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:31:02.580138   64909 start.go:128] duration metric: took 7.052376255s to createHost
	I1003 18:31:02.580160   64909 start.go:83] releasing machines lock for "ha-422561", held for 7.052515614s
	I1003 18:31:02.580230   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.596730   64909 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:31:02.596776   64909 ssh_runner.go:195] Run: cat /version.json
	I1003 18:31:02.596798   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.596817   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.613783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.614183   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.764865   64909 ssh_runner.go:195] Run: systemctl --version
	I1003 18:31:02.771251   64909 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:31:02.803643   64909 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:31:02.807949   64909 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:31:02.808044   64909 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:31:02.833024   64909 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 18:31:02.833043   64909 start.go:495] detecting cgroup driver to use...
	I1003 18:31:02.833073   64909 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:31:02.833108   64909 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:31:02.847613   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:31:02.858865   64909 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:31:02.858910   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:31:02.874470   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:31:02.890554   64909 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:31:02.970342   64909 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:31:03.055310   64909 docker.go:234] disabling docker service ...
	I1003 18:31:03.055369   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:31:03.072668   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:31:03.084308   64909 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:31:03.163959   64909 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:31:03.241930   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
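	(The block above stops, disables, and masks cri-docker and docker so that cri-o is the only runtime the kubelet can reach. A hedged check, sketch only, with unit names taken from the commands above:)
	  sudo systemctl is-enabled docker.socket cri-docker.socket cri-docker.service 2>/dev/null   # expect masked/disabled
	  sudo systemctl is-active docker                                                            # expect: inactive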
	I1003 18:31:03.253863   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:31:03.266905   64909 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:31:03.266971   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.276795   64909 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:31:03.276848   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.285157   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.293117   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.301070   64909 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:31:03.308489   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.316789   64909 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.329424   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.337651   64909 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:31:03.344839   64909 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:31:03.352026   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.430894   64909 ssh_runner.go:195] Run: sudo systemctl restart crio
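	(The sed edits above pin the pause image, switch cri-o to the systemd cgroup driver, move conmon into the pod cgroup, and open unprivileged low ports; the restart makes them take effect. A quick way to confirm the result on the node, a sketch using only paths from the commands above:)
	  sudo cat /etc/crictl.yaml   # runtime-endpoint: unix:///var/run/crio/crio.sock
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.10.1"
	  # cgroup_manager = "systemd"
	  # conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",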
	I1003 18:31:03.533915   64909 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:31:03.534002   64909 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:31:03.537783   64909 start.go:563] Will wait 60s for crictl version
	I1003 18:31:03.537838   64909 ssh_runner.go:195] Run: which crictl
	I1003 18:31:03.541393   64909 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:31:03.564883   64909 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:31:03.564963   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.591363   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.619425   64909 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:31:03.620466   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:31:03.637151   64909 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:31:03.641184   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:31:03.651292   64909 kubeadm.go:883] updating cluster {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:31:03.651379   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:31:03.651428   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.680883   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.680904   64909 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:31:03.680955   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.706829   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.706859   64909 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:31:03.706866   64909 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 18:31:03.706953   64909 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 18:31:03.707032   64909 ssh_runner.go:195] Run: crio config
	I1003 18:31:03.751501   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:31:03.751523   64909 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:31:03.751538   64909 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:31:03.751558   64909 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422561 NodeName:ha-422561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:31:03.751669   64909 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422561"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 18:31:03.751691   64909 kube-vip.go:115] generating kube-vip config ...
	I1003 18:31:03.751728   64909 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1003 18:31:03.763009   64909 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:31:03.763125   64909 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
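	(Because the ip_vs modules were reported missing above, kube-vip here only manages the 192.168.49.254 VIP via ARP, without IPVS load-balancing. If the control plane never comes up, as happens later in this log, the static pod and the VIP are worth checking; a sketch using names from the manifest above:)
	  sudo crictl ps -a | grep kube-vip                # static-pod container state
	  ip addr show eth0 | grep 192.168.49.254          # is the VIP bound to vip_interface?
	  kubectl -n kube-system get lease plndr-cp-lock   # leader-election lease (needs a working apiserver)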
	I1003 18:31:03.763181   64909 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:31:03.770585   64909 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:31:03.770633   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 18:31:03.778069   64909 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1003 18:31:03.790397   64909 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:31:03.805112   64909 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1003 18:31:03.817362   64909 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1003 18:31:03.830824   64909 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1003 18:31:03.834300   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:31:03.843861   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.921407   64909 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:31:03.944431   64909 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561 for IP: 192.168.49.2
	I1003 18:31:03.944451   64909 certs.go:195] generating shared ca certs ...
	I1003 18:31:03.944468   64909 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:03.944607   64909 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:31:03.944644   64909 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:31:03.944652   64909 certs.go:257] generating profile certs ...
	I1003 18:31:03.944708   64909 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key
	I1003 18:31:03.944722   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt with IP's: []
	I1003 18:31:04.171087   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt ...
	I1003 18:31:04.171118   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt: {Name:mked6cb0f731cbb630d2b187c4975015a458a284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171291   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key ...
	I1003 18:31:04.171301   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key: {Name:mk0c9f0a0941d99f2af213cd316467f053532c99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171391   64909 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905
	I1003 18:31:04.171406   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1003 18:31:04.383185   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 ...
	I1003 18:31:04.383218   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905: {Name:mkc24c55d4abb428b3559a93e6e301be2cab703a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383381   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 ...
	I1003 18:31:04.383394   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905: {Name:mk0576a73623089a3eecf4e34bbbd214545e2247 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383486   64909 certs.go:382] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt
	I1003 18:31:04.383601   64909 certs.go:386] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key
	I1003 18:31:04.383674   64909 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key
	I1003 18:31:04.383689   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt with IP's: []
	I1003 18:31:04.628083   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt ...
	I1003 18:31:04.628112   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt: {Name:mkc19179c67a2559968759165df93d304eb42db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.628269   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key ...
	I1003 18:31:04.628279   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key: {Name:mka8b2392a3d721a70329b852837f3403643f948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.628347   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:31:04.628364   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:31:04.628375   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:31:04.628384   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:31:04.628397   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:31:04.628410   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:31:04.628430   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:31:04.628442   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:31:04.628492   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:31:04.628525   64909 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:31:04.628535   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:31:04.628558   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:31:04.628580   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:31:04.628601   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:31:04.628637   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:04.628666   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.628680   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.628692   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.629254   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:31:04.646879   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:31:04.663465   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:31:04.679837   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:31:04.695959   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 18:31:04.712689   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:31:04.729310   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:31:04.745587   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:31:04.761663   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:31:04.779546   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:31:04.796119   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
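	(The apiserver cert copied above was generated for the service IP, localhost, the node IP, and the HA VIP. One way to confirm the SANs on the node, a sketch using the destination path from the scp above:)
	  sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
	  # expect: IP Address:10.96.0.1, IP Address:127.0.0.1, IP Address:10.0.0.1, IP Address:192.168.49.2, IP Address:192.168.49.254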
	I1003 18:31:04.813748   64909 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:31:04.826629   64909 ssh_runner.go:195] Run: openssl version
	I1003 18:31:04.832848   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:31:04.840960   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844465   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844506   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.878276   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:31:04.886714   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:31:04.894672   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898099   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898154   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.931606   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:31:04.940357   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:31:04.948454   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952097   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952148   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.985741   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
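	(The runs above link each CA into /etc/ssl/certs under its OpenSSL subject-hash name, the 3ec20f2e.0, b5213941.0, and 51391683.0 links, which is the same convention c_rehash uses so that openssl can find trust anchors by hash. Roughly, per certificate:)
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # b5213941.0 in this run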
	I1003 18:31:04.994005   64909 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:31:04.997322   64909 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 18:31:04.997379   64909 kubeadm.go:400] StartCluster: {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:31:04.997476   64909 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:31:04.997539   64909 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:31:05.022530   64909 cri.go:89] found id: ""
	I1003 18:31:05.022595   64909 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:31:05.030329   64909 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:31:05.037782   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:31:05.037841   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:31:05.045127   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:31:05.045142   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:31:05.045174   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:31:05.052235   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:31:05.052286   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:31:05.059062   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:31:05.066034   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:31:05.066081   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:31:05.072912   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.079906   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:31:05.079966   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.086575   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:31:05.093500   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:31:05.093559   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
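	(The eight runs above implement one pattern four times: keep a kubeconfig only if it already points at the HA endpoint, otherwise delete it so kubeadm regenerates it. Roughly equivalent shell, a sketch rather than minikube's actual code:)
	  for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	      || sudo rm -f "/etc/kubernetes/${f}.conf"
	  done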
	I1003 18:31:05.100246   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:31:05.136174   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:31:05.136254   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:31:05.156320   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:31:05.156407   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:31:05.156462   64909 kubeadm.go:318] OS: Linux
	I1003 18:31:05.156539   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:31:05.156610   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:31:05.156705   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:31:05.156790   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:31:05.156865   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:31:05.156939   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:31:05.157035   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:31:05.157127   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:31:05.210250   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:31:05.210408   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:31:05.210566   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:31:05.217643   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:31:05.219725   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:31:05.219828   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:31:05.219943   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:31:05.398135   64909 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 18:31:05.511875   64909 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 18:31:05.863575   64909 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 18:31:06.044823   64909 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 18:31:06.083505   64909 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 18:31:06.083616   64909 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.181464   64909 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 18:31:06.181591   64909 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.345813   64909 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 18:31:06.565989   64909 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 18:31:06.759809   64909 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 18:31:06.759892   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:31:06.883072   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:31:07.211268   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:31:07.403076   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:31:07.687412   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:31:08.052476   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:31:08.052957   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:31:08.054984   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:31:08.056889   64909 out.go:252]   - Booting up control plane ...
	I1003 18:31:08.056984   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:31:08.057047   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:31:08.057102   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:31:08.069846   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:31:08.069954   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:31:08.077490   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:31:08.077826   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:31:08.077870   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:31:08.170750   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:31:08.170893   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:31:09.172507   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001794723s
	I1003 18:31:09.175233   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:31:09.175335   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:31:09.175418   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:31:09.175496   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:35:09.177158   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	I1003 18:35:09.177466   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	I1003 18:35:09.177673   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	I1003 18:35:09.177731   64909 kubeadm.go:318] 
	I1003 18:35:09.177887   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:35:09.178114   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:35:09.178320   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:35:09.178580   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:35:09.178818   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:35:09.179017   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:35:09.179033   64909 kubeadm.go:318] 
	I1003 18:35:09.182028   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:09.182304   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:35:09.182918   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1003 18:35:09.183015   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1003 18:35:09.183174   64909 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001794723s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
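	(kubeadm's own hint above is the right starting point; a triage sequence for this failure mode, sketch only, where CONTAINERID is a placeholder for an ID printed by the first command:)
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	  sudo journalctl -u kubelet --no-pager | tail -n 50   # kubelet-side view of why the static pods never became healthy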
	
	I1003 18:35:09.183243   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 18:35:11.953646   64909 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.770379999s)
	I1003 18:35:11.953721   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:35:11.965876   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:35:11.965928   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:35:11.973363   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:35:11.973382   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:35:11.973419   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:35:11.980752   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:35:11.980806   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:35:11.987857   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:35:11.995081   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:35:11.995127   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:35:12.001778   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.009063   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:35:12.009126   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.015927   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:35:12.022875   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:35:12.022943   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:35:12.029549   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:35:12.082477   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:12.138594   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:39:14.312592   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1003 18:39:14.312818   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 18:39:14.315914   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:39:14.315992   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:39:14.316115   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:39:14.316166   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:39:14.316250   64909 kubeadm.go:318] OS: Linux
	I1003 18:39:14.316328   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:39:14.316401   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:39:14.316475   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:39:14.316553   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:39:14.316624   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:39:14.316701   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:39:14.316751   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:39:14.316825   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:39:14.316936   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:39:14.317123   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:39:14.317262   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:39:14.317314   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:39:14.319872   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:39:14.319940   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:39:14.320033   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:39:14.320122   64909 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 18:39:14.320186   64909 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 18:39:14.320253   64909 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 18:39:14.320299   64909 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 18:39:14.320350   64909 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 18:39:14.320420   64909 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 18:39:14.320509   64909 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 18:39:14.320604   64909 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 18:39:14.320671   64909 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 18:39:14.320751   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:39:14.320828   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:39:14.320904   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:39:14.321006   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:39:14.321096   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:39:14.321174   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:39:14.321279   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:39:14.321373   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:39:14.322793   64909 out.go:252]   - Booting up control plane ...
	I1003 18:39:14.322884   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:39:14.323004   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:39:14.323072   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:39:14.323162   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:39:14.323237   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:39:14.323335   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:39:14.323415   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:39:14.323456   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:39:14.323557   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:39:14.323652   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:39:14.323702   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001540709s
	I1003 18:39:14.323792   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:39:14.323860   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:39:14.323946   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:39:14.324043   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:39:14.324124   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	I1003 18:39:14.324186   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	I1003 18:39:14.324248   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	I1003 18:39:14.324258   64909 kubeadm.go:318] 
	I1003 18:39:14.324352   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:39:14.324439   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:39:14.324519   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:39:14.324595   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:39:14.324687   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:39:14.324773   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:39:14.324799   64909 kubeadm.go:318] 
	I1003 18:39:14.324836   64909 kubeadm.go:402] duration metric: took 8m9.327461574s to StartCluster
	I1003 18:39:14.324877   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:39:14.324935   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:39:14.352551   64909 cri.go:89] found id: ""
	I1003 18:39:14.352594   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.352608   64909 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:39:14.352617   64909 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:39:14.352684   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:39:14.376604   64909 cri.go:89] found id: ""
	I1003 18:39:14.376629   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.376638   64909 logs.go:284] No container was found matching "etcd"
	I1003 18:39:14.376643   64909 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:39:14.376750   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:39:14.401480   64909 cri.go:89] found id: ""
	I1003 18:39:14.401504   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.401512   64909 logs.go:284] No container was found matching "coredns"
	I1003 18:39:14.401517   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:39:14.401582   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:39:14.426822   64909 cri.go:89] found id: ""
	I1003 18:39:14.426858   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.426871   64909 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:39:14.426879   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:39:14.426946   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:39:14.451679   64909 cri.go:89] found id: ""
	I1003 18:39:14.451710   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.451722   64909 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:39:14.451730   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:39:14.451787   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:39:14.477253   64909 cri.go:89] found id: ""
	I1003 18:39:14.477275   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.477282   64909 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:39:14.477288   64909 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:39:14.477332   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:39:14.501586   64909 cri.go:89] found id: ""
	I1003 18:39:14.501613   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.501621   64909 logs.go:284] No container was found matching "kindnet"
	I1003 18:39:14.501632   64909 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:39:14.501643   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:39:14.561285   64909 logs.go:123] Gathering logs for container status ...
	I1003 18:39:14.561318   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:39:14.589589   64909 logs.go:123] Gathering logs for kubelet ...
	I1003 18:39:14.589614   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:39:14.656775   64909 logs.go:123] Gathering logs for dmesg ...
	I1003 18:39:14.656809   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:39:14.668000   64909 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:39:14.668023   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:39:14.725446   64909 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1003 18:39:14.725478   64909 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	W1003 18:39:14.725530   64909 out.go:285] * 
	W1003 18:39:14.725612   64909 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:39:14.725629   64909 out.go:285] * 
	W1003 18:39:14.727399   64909 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:39:14.731087   64909 out.go:203] 
	W1003 18:39:14.732560   64909 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:39:14.732585   64909 out.go:285] * 
	I1003 18:39:14.734183   64909 out.go:203] 
	
	
	==> CRI-O <==
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.92887498Z" level=info msg="createCtr: deleting container e2f4b8a4b4eb69392834fbdf154cc4c03d0594e25846b955a947d26192dbeeb2 from storage" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.930691085Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-422561_kube-system_6ecf19dd95945fcfeaff027fad95c1ee_0" id=6406fac4-1b44-4912-9cfd-8fddc1257c83 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:53 ha-422561 crio[781]: time="2025-10-03T18:40:53.931071975Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-422561_kube-system_e643a03771f1e72f527532eff2c66a9c_0" id=a270eb16-d817-4f6a-a2b8-ec941dc0bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:59 ha-422561 crio[781]: time="2025-10-03T18:40:59.896676834Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=e80e16c5-7dfd-4ef8-8e6e-03529915c036 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:59 ha-422561 crio[781]: time="2025-10-03T18:40:59.897703607Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=ab02f174-cd35-4736-bdbe-87f2667b0304 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:40:59 ha-422561 crio[781]: time="2025-10-03T18:40:59.898700973Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-422561/kube-scheduler" id=ecc4b094-691c-4346-a73a-eeac7390351e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:59 ha-422561 crio[781]: time="2025-10-03T18:40:59.898955098Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:59 ha-422561 crio[781]: time="2025-10-03T18:40:59.905326953Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:59 ha-422561 crio[781]: time="2025-10-03T18:40:59.905819116Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:40:59 ha-422561 crio[781]: time="2025-10-03T18:40:59.922149617Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ecc4b094-691c-4346-a73a-eeac7390351e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:59 ha-422561 crio[781]: time="2025-10-03T18:40:59.923762328Z" level=info msg="createCtr: deleting container ID 880a7af357f6d938b9e8b04a18572fad3e03fe95c344cf4ac4173218c4b843f1 from idIndex" id=ecc4b094-691c-4346-a73a-eeac7390351e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:59 ha-422561 crio[781]: time="2025-10-03T18:40:59.923808838Z" level=info msg="createCtr: removing container 880a7af357f6d938b9e8b04a18572fad3e03fe95c344cf4ac4173218c4b843f1" id=ecc4b094-691c-4346-a73a-eeac7390351e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:59 ha-422561 crio[781]: time="2025-10-03T18:40:59.923852046Z" level=info msg="createCtr: deleting container 880a7af357f6d938b9e8b04a18572fad3e03fe95c344cf4ac4173218c4b843f1 from storage" id=ecc4b094-691c-4346-a73a-eeac7390351e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:40:59 ha-422561 crio[781]: time="2025-10-03T18:40:59.926254523Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-422561_kube-system_2640157afe5e174d7402164688eed7be_0" id=ecc4b094-691c-4346-a73a-eeac7390351e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:00 ha-422561 crio[781]: time="2025-10-03T18:41:00.897017726Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=0027b3c4-77ab-48fb-a89f-4380db8ee9de name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:41:00 ha-422561 crio[781]: time="2025-10-03T18:41:00.897988877Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=70d3d94a-c69c-4478-a8f6-6986310e9ed7 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:41:00 ha-422561 crio[781]: time="2025-10-03T18:41:00.898767019Z" level=info msg="Creating container: kube-system/etcd-ha-422561/etcd" id=667f27b4-e0b0-451c-8773-94f06a05f0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:00 ha-422561 crio[781]: time="2025-10-03T18:41:00.899017588Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:41:00 ha-422561 crio[781]: time="2025-10-03T18:41:00.903258928Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:41:00 ha-422561 crio[781]: time="2025-10-03T18:41:00.903685925Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:41:00 ha-422561 crio[781]: time="2025-10-03T18:41:00.91703823Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=667f27b4-e0b0-451c-8773-94f06a05f0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:00 ha-422561 crio[781]: time="2025-10-03T18:41:00.918415805Z" level=info msg="createCtr: deleting container ID b4526d68c8c216dd4eec9b76e9914849b06e2490525771303937a8dc214d6ca2 from idIndex" id=667f27b4-e0b0-451c-8773-94f06a05f0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:00 ha-422561 crio[781]: time="2025-10-03T18:41:00.918451628Z" level=info msg="createCtr: removing container b4526d68c8c216dd4eec9b76e9914849b06e2490525771303937a8dc214d6ca2" id=667f27b4-e0b0-451c-8773-94f06a05f0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:00 ha-422561 crio[781]: time="2025-10-03T18:41:00.918487076Z" level=info msg="createCtr: deleting container b4526d68c8c216dd4eec9b76e9914849b06e2490525771303937a8dc214d6ca2 from storage" id=667f27b4-e0b0-451c-8773-94f06a05f0ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:00 ha-422561 crio[781]: time="2025-10-03T18:41:00.920776003Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-422561_kube-system_6803106e6cb30e1b9b282ce29772fddf_0" id=667f27b4-e0b0-451c-8773-94f06a05f0ee name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:41:01.366194    4235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:41:01.366760    4235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:41:01.368410    4235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:41:01.369009    4235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:41:01.370578    4235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:41:01 up  1:23,  0 user,  load average: 0.70, 0.21, 0.11
	Linux ha-422561 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:40:53 ha-422561 kubelet[1961]:  > podSandboxID="2bca45b92f4f55f540f80dd9d8d3d282362f7f0ecce2ac4786e27a3b4a9cfd4d"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.931391    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:40:53 ha-422561 kubelet[1961]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-422561_kube-system(e643a03771f1e72f527532eff2c66a9c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:53 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:40:53 ha-422561 kubelet[1961]: E1003 18:40:53.932375    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-422561" podUID="e643a03771f1e72f527532eff2c66a9c"
	Oct 03 18:40:55 ha-422561 kubelet[1961]: E1003 18:40:55.536511    1961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-422561?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 03 18:40:55 ha-422561 kubelet[1961]: I1003 18:40:55.697332    1961 kubelet_node_status.go:75] "Attempting to register node" node="ha-422561"
	Oct 03 18:40:55 ha-422561 kubelet[1961]: E1003 18:40:55.697723    1961 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-422561"
	Oct 03 18:40:58 ha-422561 kubelet[1961]: E1003 18:40:58.954263    1961 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 03 18:40:59 ha-422561 kubelet[1961]: E1003 18:40:59.896089    1961 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:40:59 ha-422561 kubelet[1961]: E1003 18:40:59.926554    1961 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:40:59 ha-422561 kubelet[1961]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:59 ha-422561 kubelet[1961]:  > podSandboxID="a10975bd62b256134c3b4cd528b6d141353311ccb4309c6a5b3dea224dc6ecb8"
	Oct 03 18:40:59 ha-422561 kubelet[1961]: E1003 18:40:59.926651    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:40:59 ha-422561 kubelet[1961]:         container kube-scheduler start failed in pod kube-scheduler-ha-422561_kube-system(2640157afe5e174d7402164688eed7be): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:40:59 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:40:59 ha-422561 kubelet[1961]: E1003 18:40:59.926686    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-422561" podUID="2640157afe5e174d7402164688eed7be"
	Oct 03 18:41:00 ha-422561 kubelet[1961]: E1003 18:41:00.896565    1961 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:41:00 ha-422561 kubelet[1961]: E1003 18:41:00.921105    1961 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:41:00 ha-422561 kubelet[1961]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:41:00 ha-422561 kubelet[1961]:  > podSandboxID="d8c61f11856eaf647667c61ede204d0da4f897662d4f66aa1405fe26a28a98f5"
	Oct 03 18:41:00 ha-422561 kubelet[1961]: E1003 18:41:00.921221    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:41:00 ha-422561 kubelet[1961]:         container etcd start failed in pod etcd-ha-422561_kube-system(6803106e6cb30e1b9b282ce29772fddf): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:41:00 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:41:00 ha-422561 kubelet[1961]: E1003 18:41:00.921262    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-422561" podUID="6803106e6cb30e1b9b282ce29772fddf"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561: exit status 6 (305.250624ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1003 18:41:01.751614   75488 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-422561" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.58s)
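
Note on the failure mode captured above: every control-plane container (kube-apiserver, kube-controller-manager, kube-scheduler, etcd) dies at creation with CRI-O's "Container creation error: cannot open sd-bus: No such file or directory", so kubeadm's four-minute health checks against :8443, :10257 and :10259 can never succeed. CRI-O emits that message when it is configured for the systemd cgroup manager but cannot reach a systemd/D-Bus socket inside the docker-driver node. A minimal triage sketch, using the node name from the log and standard CRI-O config paths; whether this is the actual root cause of this run is an assumption, not something the report verifies:

	# Is a systemd bus socket reachable inside the node container?
	docker exec ha-422561 ls -l /run/systemd/private /run/dbus/system_bus_socket
	# Which cgroup manager is CRI-O configured with?
	docker exec ha-422561 grep -R cgroup_manager /etc/crio/
	# If it reports "systemd" while no bus socket exists, pointing CRI-O at
	# cgroupfs (in /etc/crio/crio.conf or a drop-in under /etc/crio/crio.conf.d/)
	# removes the sd-bus dependency:
	#   [crio.runtime]
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	docker exec ha-422561 systemctl restart crio   # kicbase nodes run systemd as init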

TestMultiControlPlane/serial/RestartSecondaryNode (43.32s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 node start m02 --alsologtostderr -v 5: exit status 85 (64.554868ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1003 18:41:01.818137   75599 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:41:01.818422   75599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:01.818432   75599 out.go:374] Setting ErrFile to fd 2...
	I1003 18:41:01.818436   75599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:01.818643   75599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:41:01.818888   75599 mustload.go:65] Loading cluster: ha-422561
	I1003 18:41:01.819197   75599 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:01.821222   75599 out.go:203] 
	W1003 18:41:01.822476   75599 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1003 18:41:01.822489   75599 out.go:285] * 
	* 
	W1003 18:41:01.825608   75599 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:41:01.826956   75599 out.go:203] 

** /stderr **
ha_test.go:424: I1003 18:41:01.818137   75599 out.go:360] Setting OutFile to fd 1 ...
I1003 18:41:01.818422   75599 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:41:01.818432   75599 out.go:374] Setting ErrFile to fd 2...
I1003 18:41:01.818436   75599 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:41:01.818643   75599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
I1003 18:41:01.818888   75599 mustload.go:65] Loading cluster: ha-422561
I1003 18:41:01.819197   75599 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:41:01.821222   75599 out.go:203] 
W1003 18:41:01.822476   75599 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1003 18:41:01.822489   75599 out.go:285] * 
* 
W1003 18:41:01.825608   75599 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1003 18:41:01.826956   75599 out.go:203] 

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-linux-amd64 -p ha-422561 node start m02 --alsologtostderr -v 5": exit status 85
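
The GUEST_NODE_RETRIEVE error above means the ha-422561 profile no longer records a node named m02, so `node start m02` exits with status 85 before touching any container. A minimal diagnostic sketch (not part of the test run; assumes the same minikube binary and profile are still present) to confirm which nodes the profile still knows about:

	out/minikube-linux-amd64 profile list
	out/minikube-linux-amd64 -p ha-422561 node list

If m02 is absent from both listings, it would have to be re-added (for example with `out/minikube-linux-amd64 -p ha-422561 node add --control-plane`) before `node start m02` could succeed.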
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5: exit status 6 (289.020971ms)

-- stdout --
	ha-422561
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1003 18:41:01.882448   75610 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:41:01.882707   75610 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:01.882715   75610 out.go:374] Setting ErrFile to fd 2...
	I1003 18:41:01.882719   75610 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:01.882894   75610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:41:01.883071   75610 out.go:368] Setting JSON to false
	I1003 18:41:01.883099   75610 mustload.go:65] Loading cluster: ha-422561
	I1003 18:41:01.883138   75610 notify.go:220] Checking for updates...
	I1003 18:41:01.883401   75610 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:01.883414   75610 status.go:174] checking status of ha-422561 ...
	I1003 18:41:01.883808   75610 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:01.901699   75610 status.go:371] ha-422561 host status = "Running" (err=<nil>)
	I1003 18:41:01.901719   75610 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:01.901957   75610 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:41:01.918028   75610 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:01.918277   75610 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:41:01.918336   75610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:01.935111   75610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:02.034223   75610 ssh_runner.go:195] Run: systemctl --version
	I1003 18:41:02.040967   75610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:41:02.053014   75610 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:41:02.105937   75610 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:41:02.096296992 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1003 18:41:02.106346   75610 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:41:02.106369   75610 api_server.go:166] Checking apiserver status ...
	I1003 18:41:02.106399   75610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 18:41:02.116665   75610 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:41:02.116686   75610 status.go:463] ha-422561 apiserver status = Running (err=<nil>)
	I1003 18:41:02.116699   75610 status.go:176] ha-422561 status: &{Name:ha-422561 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1003 18:41:02.121465   12212 retry.go:31] will retry after 505.742385ms: exit status 6
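
The root cause of the exit status 6 is visible in the stderr above: status.go:458 reports that "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig, which is what marks the kubeconfig as Misconfigured. A minimal recovery sketch (an illustration, not something the harness runs) following the fix the stdout warning itself recommends:

	out/minikube-linux-amd64 -p ha-422561 update-context
	kubectl config current-context    # should print ha-422561 afterwards

Until the profile's entry exists in that kubeconfig, every `status` invocation keeps returning exit status 6.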
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5: exit status 6 (289.602885ms)

-- stdout --
	ha-422561
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1003 18:41:02.678799   75738 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:41:02.679014   75738 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:02.679022   75738 out.go:374] Setting ErrFile to fd 2...
	I1003 18:41:02.679026   75738 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:02.679204   75738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:41:02.679353   75738 out.go:368] Setting JSON to false
	I1003 18:41:02.679383   75738 mustload.go:65] Loading cluster: ha-422561
	I1003 18:41:02.679432   75738 notify.go:220] Checking for updates...
	I1003 18:41:02.679706   75738 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:02.679718   75738 status.go:174] checking status of ha-422561 ...
	I1003 18:41:02.680138   75738 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:02.700454   75738 status.go:371] ha-422561 host status = "Running" (err=<nil>)
	I1003 18:41:02.700486   75738 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:02.700705   75738 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:41:02.717099   75738 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:02.717318   75738 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:41:02.717354   75738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:02.734003   75738 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:02.831866   75738 ssh_runner.go:195] Run: systemctl --version
	I1003 18:41:02.837868   75738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:41:02.849342   75738 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:41:02.902835   75738 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:41:02.892335455 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1003 18:41:02.903278   75738 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:41:02.903305   75738 api_server.go:166] Checking apiserver status ...
	I1003 18:41:02.903334   75738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 18:41:02.913003   75738 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:41:02.913022   75738 status.go:463] ha-422561 apiserver status = Running (err=<nil>)
	I1003 18:41:02.913031   75738 status.go:176] ha-422561 status: &{Name:ha-422561 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1003 18:41:02.918116   12212 retry.go:31] will retry after 1.688023665s: exit status 6
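
The retry.go:31 lines show the harness re-running `status` with randomized, roughly increasing backoff (505ms and 1.69s so far, growing to ~9s in the attempts below). A rough shell equivalent of that loop (an illustration only; the actual logic lives in the test's Go retry helper):

	until out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5; do
	    sleep 2   # the harness uses randomized, increasing delays instead of a fixed one
	done

Since the missing kubeconfig entry never comes back on its own, every attempt fails the same way until the retry budget is exhausted.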
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5: exit status 6 (284.983708ms)

-- stdout --
	ha-422561
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1003 18:41:04.657229   75850 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:41:04.657473   75850 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:04.657481   75850 out.go:374] Setting ErrFile to fd 2...
	I1003 18:41:04.657485   75850 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:04.657670   75850 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:41:04.657809   75850 out.go:368] Setting JSON to false
	I1003 18:41:04.657844   75850 mustload.go:65] Loading cluster: ha-422561
	I1003 18:41:04.657915   75850 notify.go:220] Checking for updates...
	I1003 18:41:04.658308   75850 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:04.658339   75850 status.go:174] checking status of ha-422561 ...
	I1003 18:41:04.658808   75850 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:04.676138   75850 status.go:371] ha-422561 host status = "Running" (err=<nil>)
	I1003 18:41:04.676158   75850 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:04.676390   75850 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:41:04.692799   75850 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:04.693076   75850 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:41:04.693120   75850 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:04.709426   75850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:04.805863   75850 ssh_runner.go:195] Run: systemctl --version
	I1003 18:41:04.811848   75850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:41:04.823535   75850 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:41:04.876678   75850 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:41:04.866957714 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1003 18:41:04.877224   75850 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:41:04.877261   75850 api_server.go:166] Checking apiserver status ...
	I1003 18:41:04.877302   75850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 18:41:04.887061   75850 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:41:04.887083   75850 status.go:463] ha-422561 apiserver status = Running (err=<nil>)
	I1003 18:41:04.887096   75850 status.go:176] ha-422561 status: &{Name:ha-422561 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1003 18:41:04.892236   12212 retry.go:31] will retry after 1.15109953s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5: exit status 6 (287.846659ms)

-- stdout --
	ha-422561
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1003 18:41:06.095504   75968 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:41:06.095840   75968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:06.095854   75968 out.go:374] Setting ErrFile to fd 2...
	I1003 18:41:06.095860   75968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:06.096344   75968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:41:06.096839   75968 out.go:368] Setting JSON to false
	I1003 18:41:06.096998   75968 mustload.go:65] Loading cluster: ha-422561
	I1003 18:41:06.097018   75968 notify.go:220] Checking for updates...
	I1003 18:41:06.097402   75968 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:06.097422   75968 status.go:174] checking status of ha-422561 ...
	I1003 18:41:06.097940   75968 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:06.116454   75968 status.go:371] ha-422561 host status = "Running" (err=<nil>)
	I1003 18:41:06.116472   75968 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:06.116731   75968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:41:06.133469   75968 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:06.133770   75968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:41:06.133830   75968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:06.150433   75968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:06.247971   75968 ssh_runner.go:195] Run: systemctl --version
	I1003 18:41:06.254063   75968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:41:06.265590   75968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:41:06.317719   75968 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:41:06.308162721 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1003 18:41:06.318178   75968 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:41:06.318210   75968 api_server.go:166] Checking apiserver status ...
	I1003 18:41:06.318255   75968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 18:41:06.327931   75968 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:41:06.327949   75968 status.go:463] ha-422561 apiserver status = Running (err=<nil>)
	I1003 18:41:06.327958   75968 status.go:176] ha-422561 status: &{Name:ha-422561 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1003 18:41:06.332594   12212 retry.go:31] will retry after 3.999791623s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5: exit status 6 (287.932667ms)

-- stdout --
	ha-422561
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1003 18:41:10.387482   76099 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:41:10.387605   76099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:10.387614   76099 out.go:374] Setting ErrFile to fd 2...
	I1003 18:41:10.387618   76099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:10.387805   76099 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:41:10.388000   76099 out.go:368] Setting JSON to false
	I1003 18:41:10.388034   76099 mustload.go:65] Loading cluster: ha-422561
	I1003 18:41:10.388097   76099 notify.go:220] Checking for updates...
	I1003 18:41:10.388370   76099 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:10.388385   76099 status.go:174] checking status of ha-422561 ...
	I1003 18:41:10.388865   76099 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:10.406903   76099 status.go:371] ha-422561 host status = "Running" (err=<nil>)
	I1003 18:41:10.406925   76099 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:10.407269   76099 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:41:10.424463   76099 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:10.424801   76099 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:41:10.424860   76099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:10.441359   76099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:10.539917   76099 ssh_runner.go:195] Run: systemctl --version
	I1003 18:41:10.545875   76099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:41:10.557886   76099 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:41:10.609418   76099 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:41:10.599375592 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1003 18:41:10.609871   76099 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:41:10.609909   76099 api_server.go:166] Checking apiserver status ...
	I1003 18:41:10.609946   76099 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 18:41:10.619444   76099 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:41:10.619463   76099 status.go:463] ha-422561 apiserver status = Running (err=<nil>)
	I1003 18:41:10.619476   76099 status.go:176] ha-422561 status: &{Name:ha-422561 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1003 18:41:10.623912   12212 retry.go:31] will retry after 7.324353815s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5: exit status 6 (290.149736ms)

-- stdout --
	ha-422561
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1003 18:41:18.005487   76256 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:41:18.005745   76256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:18.005755   76256 out.go:374] Setting ErrFile to fd 2...
	I1003 18:41:18.005759   76256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:18.005960   76256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:41:18.006131   76256 out.go:368] Setting JSON to false
	I1003 18:41:18.006165   76256 mustload.go:65] Loading cluster: ha-422561
	I1003 18:41:18.006235   76256 notify.go:220] Checking for updates...
	I1003 18:41:18.006680   76256 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:18.006698   76256 status.go:174] checking status of ha-422561 ...
	I1003 18:41:18.007208   76256 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:18.025117   76256 status.go:371] ha-422561 host status = "Running" (err=<nil>)
	I1003 18:41:18.025157   76256 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:18.025527   76256 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:41:18.044743   76256 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:18.044992   76256 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:41:18.045036   76256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:18.062407   76256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:18.159887   76256 ssh_runner.go:195] Run: systemctl --version
	I1003 18:41:18.165773   76256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:41:18.177176   76256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:41:18.228922   76256 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:41:18.219117849 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1003 18:41:18.229427   76256 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:41:18.229451   76256 api_server.go:166] Checking apiserver status ...
	I1003 18:41:18.229496   76256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 18:41:18.239324   76256 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:41:18.239343   76256 status.go:463] ha-422561 apiserver status = Running (err=<nil>)
	I1003 18:41:18.239355   76256 status.go:176] ha-422561 status: &{Name:ha-422561 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1003 18:41:18.244533   12212 retry.go:31] will retry after 6.223485573s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5: exit status 6 (290.475409ms)

-- stdout --
	ha-422561
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1003 18:41:24.522416   76395 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:41:24.522694   76395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:24.522704   76395 out.go:374] Setting ErrFile to fd 2...
	I1003 18:41:24.522708   76395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:24.522967   76395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:41:24.523194   76395 out.go:368] Setting JSON to false
	I1003 18:41:24.523230   76395 mustload.go:65] Loading cluster: ha-422561
	I1003 18:41:24.523354   76395 notify.go:220] Checking for updates...
	I1003 18:41:24.523704   76395 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:24.523719   76395 status.go:174] checking status of ha-422561 ...
	I1003 18:41:24.524246   76395 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:24.541997   76395 status.go:371] ha-422561 host status = "Running" (err=<nil>)
	I1003 18:41:24.542024   76395 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:24.542294   76395 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:41:24.561042   76395 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:24.561301   76395 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:41:24.561365   76395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:24.578327   76395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:24.675969   76395 ssh_runner.go:195] Run: systemctl --version
	I1003 18:41:24.682181   76395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:41:24.694027   76395 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:41:24.746147   76395 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:41:24.735874946 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1003 18:41:24.746553   76395 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:41:24.746575   76395 api_server.go:166] Checking apiserver status ...
	I1003 18:41:24.746604   76395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 18:41:24.756233   76395 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:41:24.756266   76395 status.go:463] ha-422561 apiserver status = Running (err=<nil>)
	I1003 18:41:24.756279   76395 status.go:176] ha-422561 status: &{Name:ha-422561 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1003 18:41:24.761463   12212 retry.go:31] will retry after 9.117348176s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5: exit status 6 (297.89309ms)

-- stdout --
	ha-422561
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1003 18:41:33.932494   76551 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:41:33.932960   76551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:33.932971   76551 out.go:374] Setting ErrFile to fd 2...
	I1003 18:41:33.932989   76551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:33.933169   76551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:41:33.933355   76551 out.go:368] Setting JSON to false
	I1003 18:41:33.933394   76551 mustload.go:65] Loading cluster: ha-422561
	I1003 18:41:33.933508   76551 notify.go:220] Checking for updates...
	I1003 18:41:33.933788   76551 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:33.933805   76551 status.go:174] checking status of ha-422561 ...
	I1003 18:41:33.934252   76551 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:33.953225   76551 status.go:371] ha-422561 host status = "Running" (err=<nil>)
	I1003 18:41:33.953253   76551 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:33.953470   76551 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:41:33.972833   76551 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:33.973142   76551 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:41:33.973194   76551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:33.989820   76551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:34.088059   76551 ssh_runner.go:195] Run: systemctl --version
	I1003 18:41:34.094154   76551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:41:34.105689   76551 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:41:34.163269   76551 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:41:34.152713367 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1003 18:41:34.163825   76551 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:41:34.163854   76551 api_server.go:166] Checking apiserver status ...
	I1003 18:41:34.163903   76551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 18:41:34.173545   76551 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:41:34.173565   76551 status.go:463] ha-422561 apiserver status = Running (err=<nil>)
	I1003 18:41:34.173579   76551 status.go:176] ha-422561 status: &{Name:ha-422561 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1003 18:41:34.178628   12212 retry.go:31] will retry after 9.378165033s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5: exit status 6 (287.883725ms)

-- stdout --
	ha-422561
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1003 18:41:43.610347   76721 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:41:43.610585   76721 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:43.610593   76721 out.go:374] Setting ErrFile to fd 2...
	I1003 18:41:43.610597   76721 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:43.610798   76721 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:41:43.610995   76721 out.go:368] Setting JSON to false
	I1003 18:41:43.611035   76721 mustload.go:65] Loading cluster: ha-422561
	I1003 18:41:43.611128   76721 notify.go:220] Checking for updates...
	I1003 18:41:43.611353   76721 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:43.611367   76721 status.go:174] checking status of ha-422561 ...
	I1003 18:41:43.611778   76721 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:43.629292   76721 status.go:371] ha-422561 host status = "Running" (err=<nil>)
	I1003 18:41:43.629336   76721 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:43.629592   76721 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:41:43.647032   76721 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:43.647252   76721 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:41:43.647286   76721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:43.663932   76721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:43.760886   76721 ssh_runner.go:195] Run: systemctl --version
	I1003 18:41:43.766716   76721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:41:43.778379   76721 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:41:43.831139   76721 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:41:43.820378881 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1003 18:41:43.831553   76721 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:41:43.831575   76721 api_server.go:166] Checking apiserver status ...
	I1003 18:41:43.831605   76721 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 18:41:43.841310   76721 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:41:43.841333   76721 status.go:463] ha-422561 apiserver status = Stopped (err=<nil>)
	I1003 18:41:43.841346   76721 status.go:176] ha-422561 status: &{Name:ha-422561 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5" : exit status 6
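Editor's note: the exit status 6 above traces back to the status.go:458 error in the stderr block, where the "ha-422561" context is missing from the kubeconfig file, so minikube reports Kubeconfig: Misconfigured. A minimal repair sketch, assuming the same profile name and the KUBECONFIG path shown in the logs (these commands are illustrative and were not part of the recorded run):
	# confirm the "ha-422561" context is absent (assumes the kubeconfig logged above)
	kubectl config get-contexts
	# rewrite the kubeconfig entry for the existing profile, as the warning suggests
	minikube -p ha-422561 update-context
	# re-run the failing check
	out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5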
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-422561
helpers_test.go:243: (dbg) docker inspect ha-422561:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	        "Created": "2025-10-03T18:31:00.396132938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 65481,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T18:31:00.428325646Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hostname",
	        "HostsPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hosts",
	        "LogPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512-json.log",
	        "Name": "/ha-422561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-422561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-422561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	                "LowerDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-422561",
	                "Source": "/var/lib/docker/volumes/ha-422561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422561",
	                "name.minikube.sigs.k8s.io": "ha-422561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3084976d568ce061948ebe671f279a80502b1d28417f2be7c2497961eac2a5aa",
	            "SandboxKey": "/var/run/docker/netns/3084976d568c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:e4:3c:eb:d3:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de6aa7ca29f453c0d15cb280abde7ee215f554c89e78e3db8a0f7590468114b5",
	                    "EndpointID": "1b961733d045b77a64efb8afa6caa273125f56ec888f823b790f5454f23ca3b7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422561",
	                        "eef8fc426b2b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
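Editor's note: every HostPort in the container's PortBindings is assigned dynamically (empty strings in HostConfig, resolved to 32783-32787 under NetworkSettings.Ports above), so the mapped SSH port has to be read back at runtime. A small sketch using the same Go template the harness itself runs against this container:
	# prints 32783 for this run; the port changes on every container start
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-422561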
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561: exit status 6 (291.852555ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:41:44.140999   76842 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-889240 image ls --format table --alsologtostderr                                                     │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image build -t localhost/my-image:functional-889240 testdata/build --alsologtostderr          │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:27 UTC │
	│ image   │ functional-889240 image ls                                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:27 UTC │ 03 Oct 25 18:27 UTC │
	│ delete  │ -p functional-889240                                                                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │ 03 Oct 25 18:30 UTC │
	│ start   │ ha-422561 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- rollout status deployment/busybox                                                          │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node add --alsologtostderr -v 5                                                                       │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node stop m02 --alsologtostderr -v 5                                                                  │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node start m02 --alsologtostderr -v 5                                                                 │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:30:55
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:30:55.351405   64909 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:30:55.351662   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351671   64909 out.go:374] Setting ErrFile to fd 2...
	I1003 18:30:55.351675   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351854   64909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:30:55.352339   64909 out.go:368] Setting JSON to false
	I1003 18:30:55.353203   64909 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4406,"bootTime":1759511849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:30:55.353289   64909 start.go:140] virtualization: kvm guest
	I1003 18:30:55.355458   64909 out.go:179] * [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:30:55.356815   64909 notify.go:220] Checking for updates...
	I1003 18:30:55.356884   64909 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:30:55.358389   64909 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:30:55.359964   64909 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:30:55.361351   64909 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:30:55.362647   64909 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:30:55.363956   64909 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:30:55.365351   64909 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:30:55.387768   64909 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:30:55.387885   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.443407   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.433728571 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.443516   64909 docker.go:318] overlay module found
	I1003 18:30:55.445440   64909 out.go:179] * Using the docker driver based on user configuration
	I1003 18:30:55.446777   64909 start.go:304] selected driver: docker
	I1003 18:30:55.446793   64909 start.go:924] validating driver "docker" against <nil>
	I1003 18:30:55.446808   64909 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:30:55.447403   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.498777   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.489521827 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.498958   64909 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 18:30:55.499206   64909 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:30:55.501187   64909 out.go:179] * Using Docker driver with root privileges
	I1003 18:30:55.502312   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:30:55.502386   64909 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 18:30:55.502397   64909 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 18:30:55.502459   64909 start.go:348] cluster config:
	{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:30:55.503779   64909 out.go:179] * Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	I1003 18:30:55.504816   64909 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:30:55.506028   64909 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:30:55.507131   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:55.507167   64909 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:30:55.507169   64909 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:30:55.507175   64909 cache.go:58] Caching tarball of preloaded images
	I1003 18:30:55.507294   64909 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:30:55.507311   64909 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:30:55.507736   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:30:55.507764   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json: {Name:mk1ece959bac74a473416f0dfc8af04a6136d7b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:30:55.527458   64909 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:30:55.527478   64909 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:30:55.527494   64909 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:30:55.527527   64909 start.go:360] acquireMachinesLock for ha-422561: {Name:mk32fd04a5d9b5f89831583bab7d7527f4d187a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:30:55.527631   64909 start.go:364] duration metric: took 81.336µs to acquireMachinesLock for "ha-422561"
	I1003 18:30:55.527657   64909 start.go:93] Provisioning new machine with config: &{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:30:55.527748   64909 start.go:125] createHost starting for "" (driver="docker")
	I1003 18:30:55.529663   64909 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1003 18:30:55.529898   64909 start.go:159] libmachine.API.Create for "ha-422561" (driver="docker")
	I1003 18:30:55.529933   64909 client.go:168] LocalClient.Create starting
	I1003 18:30:55.530028   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem
	I1003 18:30:55.530072   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530097   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530187   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem
	I1003 18:30:55.530226   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530238   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530612   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 18:30:55.547068   64909 cli_runner.go:211] docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 18:30:55.547129   64909 network_create.go:284] running [docker network inspect ha-422561] to gather additional debugging logs...
	I1003 18:30:55.547146   64909 cli_runner.go:164] Run: docker network inspect ha-422561
	W1003 18:30:55.563141   64909 cli_runner.go:211] docker network inspect ha-422561 returned with exit code 1
	I1003 18:30:55.563167   64909 network_create.go:287] error running [docker network inspect ha-422561]: docker network inspect ha-422561: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-422561 not found
	I1003 18:30:55.563179   64909 network_create.go:289] output of [docker network inspect ha-422561]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-422561 not found
	
	** /stderr **
	I1003 18:30:55.563276   64909 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:30:55.579301   64909 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00157b3a0}
	I1003 18:30:55.579336   64909 network_create.go:124] attempt to create docker network ha-422561 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1003 18:30:55.579388   64909 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-422561 ha-422561
	I1003 18:30:55.634233   64909 network_create.go:108] docker network ha-422561 192.168.49.0/24 created
	I1003 18:30:55.634260   64909 kic.go:121] calculated static IP "192.168.49.2" for the "ha-422561" container
	I1003 18:30:55.634318   64909 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 18:30:55.649960   64909 cli_runner.go:164] Run: docker volume create ha-422561 --label name.minikube.sigs.k8s.io=ha-422561 --label created_by.minikube.sigs.k8s.io=true
	I1003 18:30:55.667186   64909 oci.go:103] Successfully created a docker volume ha-422561
	I1003 18:30:55.667250   64909 cli_runner.go:164] Run: docker run --rm --name ha-422561-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --entrypoint /usr/bin/test -v ha-422561:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 18:30:56.041615   64909 oci.go:107] Successfully prepared a docker volume ha-422561
	I1003 18:30:56.041648   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:56.041669   64909 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 18:30:56.041727   64909 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 18:31:00.326417   64909 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.284654466s)
	I1003 18:31:00.326457   64909 kic.go:203] duration metric: took 4.284784967s to extract preloaded images to volume ...
	W1003 18:31:00.326567   64909 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1003 18:31:00.326610   64909 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1003 18:31:00.326657   64909 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 18:31:00.381592   64909 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-422561 --name ha-422561 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-422561 --network ha-422561 --ip 192.168.49.2 --volume ha-422561:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1003 18:31:00.641348   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Running}}
	I1003 18:31:00.659876   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:00.678319   64909 cli_runner.go:164] Run: docker exec ha-422561 stat /var/lib/dpkg/alternatives/iptables
	I1003 18:31:00.728414   64909 oci.go:144] the created container "ha-422561" has a running status.
	I1003 18:31:00.728450   64909 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa...
	I1003 18:31:01.103610   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1003 18:31:01.103663   64909 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 18:31:01.128670   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.147200   64909 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 18:31:01.147218   64909 kic_runner.go:114] Args: [docker exec --privileged ha-422561 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 18:31:01.189023   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.207395   64909 machine.go:93] provisionDockerMachine start ...
	I1003 18:31:01.207497   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.226029   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.226282   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.226299   64909 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:31:01.372245   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.372275   64909 ubuntu.go:182] provisioning hostname "ha-422561"
	I1003 18:31:01.372335   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.390674   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.390889   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.390902   64909 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-422561 && echo "ha-422561" | sudo tee /etc/hostname
	I1003 18:31:01.544850   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.544932   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.563695   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.563966   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.564014   64909 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422561/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:31:01.708942   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:31:01.708971   64909 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:31:01.709036   64909 ubuntu.go:190] setting up certificates
	I1003 18:31:01.709048   64909 provision.go:84] configureAuth start
	I1003 18:31:01.709101   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:01.727778   64909 provision.go:143] copyHostCerts
	I1003 18:31:01.727814   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727849   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:31:01.727858   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727940   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:31:01.728054   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728079   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:31:01.728090   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728137   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:31:01.728200   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728225   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:31:01.728234   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728266   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:31:01.728336   64909 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.ha-422561 san=[127.0.0.1 192.168.49.2 ha-422561 localhost minikube]
	I1003 18:31:01.864219   64909 provision.go:177] copyRemoteCerts
	I1003 18:31:01.864281   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:31:01.864317   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.882069   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:01.982800   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:31:01.982877   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 18:31:02.000887   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:31:02.000952   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 18:31:02.017591   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:31:02.017639   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:31:02.034172   64909 provision.go:87] duration metric: took 325.10989ms to configureAuth
	I1003 18:31:02.034202   64909 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:31:02.034393   64909 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:31:02.034508   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.052111   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:02.052326   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:02.052344   64909 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:31:02.295594   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:31:02.295629   64909 machine.go:96] duration metric: took 1.088207423s to provisionDockerMachine
	I1003 18:31:02.295640   64909 client.go:171] duration metric: took 6.765697238s to LocalClient.Create
	I1003 18:31:02.295660   64909 start.go:167] duration metric: took 6.765761646s to libmachine.API.Create "ha-422561"
	I1003 18:31:02.295669   64909 start.go:293] postStartSetup for "ha-422561" (driver="docker")
	I1003 18:31:02.295682   64909 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:31:02.295752   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:31:02.295789   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.312783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.414720   64909 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:31:02.418127   64909 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:31:02.418149   64909 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:31:02.418159   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:31:02.418213   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:31:02.418310   64909 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:31:02.418326   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:31:02.418453   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:31:02.425623   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:02.444405   64909 start.go:296] duration metric: took 148.722871ms for postStartSetup
	I1003 18:31:02.444748   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.462226   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:31:02.462456   64909 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:31:02.462495   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.478737   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.575846   64909 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:31:02.580138   64909 start.go:128] duration metric: took 7.052376255s to createHost
	I1003 18:31:02.580160   64909 start.go:83] releasing machines lock for "ha-422561", held for 7.052515614s
	I1003 18:31:02.580230   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.596730   64909 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:31:02.596776   64909 ssh_runner.go:195] Run: cat /version.json
	I1003 18:31:02.596798   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.596817   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.613783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.614183   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.764865   64909 ssh_runner.go:195] Run: systemctl --version
	I1003 18:31:02.771251   64909 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:31:02.803643   64909 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:31:02.807949   64909 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:31:02.808044   64909 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:31:02.833024   64909 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
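Note: the stock bridge CNI configs are parked with a .mk_disabled suffix so CRI-O will not load them; the CNI minikube deploys itself (kindnet, per the multinode detection further below) ships its own config. A quick way to list what was parked (paths as reported in the line above):

    sudo ls /etc/cni/net.d/*.mk_disabled
    # expected here: 10-crio-bridge.conflist.disabled.mk_disabled and 87-podman-bridge.conflist.mk_disabled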
	I1003 18:31:02.833043   64909 start.go:495] detecting cgroup driver to use...
	I1003 18:31:02.833073   64909 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:31:02.833108   64909 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:31:02.847613   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:31:02.858865   64909 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:31:02.858910   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:31:02.874470   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:31:02.890554   64909 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:31:02.970342   64909 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:31:03.055310   64909 docker.go:234] disabling docker service ...
	I1003 18:31:03.055369   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:31:03.072668   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:31:03.084308   64909 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:31:03.163959   64909 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:31:03.241930   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:31:03.253863   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:31:03.266905   64909 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:31:03.266971   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.276795   64909 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:31:03.276848   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.285157   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.293117   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.301070   64909 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:31:03.308489   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.316789   64909 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.329424   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.337651   64909 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:31:03.344839   64909 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:31:03.352026   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.430894   64909 ssh_runner.go:195] Run: sudo systemctl restart crio
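After the sed edits above, the relevant fragment of /etc/crio/crio.conf.d/02-crio.conf should look roughly like the sketch below. This is reconstructed from the commands, not captured from the node, and the TOML table headers are assumed from CRI-O's stock drop-in layout:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]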
	I1003 18:31:03.533915   64909 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:31:03.534002   64909 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:31:03.537783   64909 start.go:563] Will wait 60s for crictl version
	I1003 18:31:03.537838   64909 ssh_runner.go:195] Run: which crictl
	I1003 18:31:03.541393   64909 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:31:03.564883   64909 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:31:03.564963   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.591363   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.619425   64909 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:31:03.620466   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:31:03.637151   64909 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:31:03.641184   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:31:03.651292   64909 kubeadm.go:883] updating cluster {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:31:03.651379   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:31:03.651428   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.680883   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.680904   64909 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:31:03.680955   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.706829   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.706859   64909 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:31:03.706866   64909 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 18:31:03.706953   64909 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
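The drop-in above lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below). To confirm which ExecStart systemd actually resolved after the daemon-reload, one can inspect the merged unit on the node:

    systemctl cat kubelet
    systemctl show kubelet -p ExecStart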
	I1003 18:31:03.707032   64909 ssh_runner.go:195] Run: crio config
	I1003 18:31:03.751501   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:31:03.751523   64909 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:31:03.751538   64909 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:31:03.751558   64909 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422561 NodeName:ha-422561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:31:03.751669   64909 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422561"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
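The rendered config above is written to /var/tmp/minikube/kubeadm.yaml (see the scp below). If a config like this is suspected of being malformed, kubeadm can check it directly; a minimal sketch, assuming a kubeadm recent enough to have the config validate subcommand:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml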
	
	I1003 18:31:03.751691   64909 kube-vip.go:115] generating kube-vip config ...
	I1003 18:31:03.751728   64909 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1003 18:31:03.763009   64909 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:31:03.763125   64909 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
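Since the ip_vs modules were not found (the lsmod check above), kube-vip here falls back to announcing the VIP 192.168.49.254 purely via ARP (vip_arp=true) with lease-based leader election, rather than IPVS load balancing. On a host where IPVS is wanted, the modules would typically be loaded first, e.g.:

    sudo modprobe -a ip_vs ip_vs_rr
    lsmod | grep ip_vs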
	I1003 18:31:03.763181   64909 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:31:03.770585   64909 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:31:03.770633   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 18:31:03.778069   64909 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1003 18:31:03.790397   64909 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:31:03.805112   64909 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1003 18:31:03.817362   64909 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1003 18:31:03.830824   64909 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1003 18:31:03.834300   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:31:03.843861   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.921407   64909 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:31:03.944431   64909 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561 for IP: 192.168.49.2
	I1003 18:31:03.944451   64909 certs.go:195] generating shared ca certs ...
	I1003 18:31:03.944468   64909 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:03.944607   64909 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:31:03.944644   64909 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:31:03.944652   64909 certs.go:257] generating profile certs ...
	I1003 18:31:03.944708   64909 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key
	I1003 18:31:03.944722   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt with IP's: []
	I1003 18:31:04.171087   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt ...
	I1003 18:31:04.171118   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt: {Name:mked6cb0f731cbb630d2b187c4975015a458a284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171291   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key ...
	I1003 18:31:04.171301   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key: {Name:mk0c9f0a0941d99f2af213cd316467f053532c99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171391   64909 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905
	I1003 18:31:04.171406   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1003 18:31:04.383185   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 ...
	I1003 18:31:04.383218   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905: {Name:mkc24c55d4abb428b3559a93e6e301be2cab703a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383381   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 ...
	I1003 18:31:04.383394   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905: {Name:mk0576a73623089a3eecf4e34bbbd214545e2247 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383486   64909 certs.go:382] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt
	I1003 18:31:04.383601   64909 certs.go:386] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key
	I1003 18:31:04.383674   64909 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key
	I1003 18:31:04.383689   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt with IP's: []
	I1003 18:31:04.628083   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt ...
	I1003 18:31:04.628112   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt: {Name:mkc19179c67a2559968759165df93d304eb42db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.628269   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key ...
	I1003 18:31:04.628279   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key: {Name:mka8b2392a3d721a70329b852837f3403643f948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.628347   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:31:04.628364   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:31:04.628375   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:31:04.628384   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:31:04.628397   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:31:04.628410   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:31:04.628430   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:31:04.628442   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:31:04.628492   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:31:04.628525   64909 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:31:04.628535   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:31:04.628558   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:31:04.628580   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:31:04.628601   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:31:04.628637   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:04.628666   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.628680   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.628692   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.629254   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:31:04.646879   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:31:04.663465   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:31:04.679837   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:31:04.695959   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 18:31:04.712689   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:31:04.729310   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:31:04.745587   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:31:04.761663   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:31:04.779546   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:31:04.796119   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:31:04.813748   64909 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
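The apiserver certificate generated above is signed for 10.96.0.1, 127.0.0.1, 10.0.0.1, the node IP 192.168.49.2 and the HA VIP 192.168.49.254 (see the crypto.go line earlier). To double-check the SANs once the file is on the node:

    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'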
	I1003 18:31:04.826629   64909 ssh_runner.go:195] Run: openssl version
	I1003 18:31:04.832848   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:31:04.840960   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844465   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844506   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.878276   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:31:04.886714   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:31:04.894672   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898099   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898154   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.931606   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:31:04.940357   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:31:04.948454   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952097   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952148   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.985741   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
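The 8-hex-digit link names above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention for CA lookup directories; each name can be reproduced from the certificate itself, e.g.:

    openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, matching the symlink created above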
	I1003 18:31:04.994005   64909 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:31:04.997322   64909 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 18:31:04.997379   64909 kubeadm.go:400] StartCluster: {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:31:04.997476   64909 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:31:04.997539   64909 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:31:05.022530   64909 cri.go:89] found id: ""
	I1003 18:31:05.022595   64909 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:31:05.030329   64909 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:31:05.037782   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:31:05.037841   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:31:05.045127   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:31:05.045142   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:31:05.045174   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:31:05.052235   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:31:05.052286   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:31:05.059062   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:31:05.066034   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:31:05.066081   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:31:05.072912   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.079906   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:31:05.079966   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.086575   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:31:05.093500   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:31:05.093559   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:31:05.100246   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:31:05.136174   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:31:05.136254   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:31:05.156320   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:31:05.156407   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:31:05.156462   64909 kubeadm.go:318] OS: Linux
	I1003 18:31:05.156539   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:31:05.156610   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:31:05.156705   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:31:05.156790   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:31:05.156865   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:31:05.156939   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:31:05.157035   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:31:05.157127   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:31:05.210250   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:31:05.210408   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:31:05.210566   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:31:05.217643   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:31:05.219725   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:31:05.219828   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:31:05.219943   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:31:05.398135   64909 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 18:31:05.511875   64909 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 18:31:05.863575   64909 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 18:31:06.044823   64909 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 18:31:06.083505   64909 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 18:31:06.083616   64909 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.181464   64909 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 18:31:06.181591   64909 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.345813   64909 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 18:31:06.565989   64909 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 18:31:06.759809   64909 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 18:31:06.759892   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:31:06.883072   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:31:07.211268   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:31:07.403076   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:31:07.687412   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:31:08.052476   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:31:08.052957   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:31:08.054984   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:31:08.056889   64909 out.go:252]   - Booting up control plane ...
	I1003 18:31:08.056984   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:31:08.057047   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:31:08.057102   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:31:08.069846   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:31:08.069954   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:31:08.077490   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:31:08.077826   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:31:08.077870   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:31:08.170750   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:31:08.170893   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:31:09.172507   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001794723s
	I1003 18:31:09.175233   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:31:09.175335   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:31:09.175418   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:31:09.175496   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:35:09.177158   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	I1003 18:35:09.177466   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	I1003 18:35:09.177673   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	I1003 18:35:09.177731   64909 kubeadm.go:318] 
	I1003 18:35:09.177887   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:35:09.178114   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:35:09.178320   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:35:09.178580   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:35:09.178818   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:35:09.179017   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:35:09.179033   64909 kubeadm.go:318] 
	I1003 18:35:09.182028   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:09.182304   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:35:09.182918   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1003 18:35:09.183015   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1003 18:35:09.183174   64909 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001794723s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
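All three control-plane health endpoints either timed out or refused connections, which usually means the static pods never started (or crashed immediately after start). Before minikube retries below, the usual next steps on the node would be along these lines (a sketch; CONTAINERID is a placeholder):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
    sudo journalctl -u kubelet --no-pager | tail -n 50
    curl -k https://192.168.49.2:8443/livez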
	
	I1003 18:35:09.183243   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 18:35:11.953646   64909 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.770379999s)
	I1003 18:35:11.953721   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:35:11.965876   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:35:11.965928   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:35:11.973363   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:35:11.973382   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:35:11.973419   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:35:11.980752   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:35:11.980806   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:35:11.987857   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:35:11.995081   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:35:11.995127   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:35:12.001778   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.009063   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:35:12.009126   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.015927   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:35:12.022875   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:35:12.022943   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
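The exchange above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and deleted when the check fails (all four files are already absent here, which is why every grep exits with status 2). A minimal shell sketch of the same check, using only the paths and endpoint shown in the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected endpoint
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done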
	I1003 18:35:12.029549   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:35:12.082477   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:12.138594   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:39:14.312592   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1003 18:39:14.312818   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 18:39:14.315914   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:39:14.315992   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:39:14.316115   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:39:14.316166   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:39:14.316250   64909 kubeadm.go:318] OS: Linux
	I1003 18:39:14.316328   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:39:14.316401   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:39:14.316475   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:39:14.316553   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:39:14.316624   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:39:14.316701   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:39:14.316751   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:39:14.316825   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:39:14.316936   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:39:14.317123   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:39:14.317262   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:39:14.317314   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:39:14.319872   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:39:14.319940   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:39:14.320033   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:39:14.320122   64909 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 18:39:14.320186   64909 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 18:39:14.320253   64909 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 18:39:14.320299   64909 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 18:39:14.320350   64909 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 18:39:14.320420   64909 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 18:39:14.320509   64909 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 18:39:14.320604   64909 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 18:39:14.320671   64909 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 18:39:14.320751   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:39:14.320828   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:39:14.320904   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:39:14.321006   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:39:14.321096   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:39:14.321174   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:39:14.321279   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:39:14.321373   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:39:14.322793   64909 out.go:252]   - Booting up control plane ...
	I1003 18:39:14.322884   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:39:14.323004   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:39:14.323072   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:39:14.323162   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:39:14.323237   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:39:14.323335   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:39:14.323415   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:39:14.323456   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:39:14.323557   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:39:14.323652   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:39:14.323702   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001540709s
	I1003 18:39:14.323792   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:39:14.323860   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:39:14.323946   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:39:14.324043   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:39:14.324124   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	I1003 18:39:14.324186   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	I1003 18:39:14.324248   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	I1003 18:39:14.324258   64909 kubeadm.go:318] 
	I1003 18:39:14.324352   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:39:14.324439   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:39:14.324519   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:39:14.324595   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:39:14.324687   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:39:14.324773   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:39:14.324799   64909 kubeadm.go:318] 
	I1003 18:39:14.324836   64909 kubeadm.go:402] duration metric: took 8m9.327461574s to StartCluster
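kubeadm's own troubleshooting advice above can be replayed by hand; this is just a sketch assembled from the commands the log suggests, with CONTAINERID left as the placeholder kubeadm uses:

    SOCK=unix:///var/run/crio/crio.sock
    # list every kube-* container, including ones that already exited
    sudo crictl --runtime-endpoint "$SOCK" ps -a | grep kube | grep -v pause
    # then read the logs of whichever container is failing
    sudo crictl --runtime-endpoint "$SOCK" logs CONTAINERID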
	I1003 18:39:14.324877   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:39:14.324935   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:39:14.352551   64909 cri.go:89] found id: ""
	I1003 18:39:14.352594   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.352608   64909 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:39:14.352617   64909 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:39:14.352684   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:39:14.376604   64909 cri.go:89] found id: ""
	I1003 18:39:14.376629   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.376638   64909 logs.go:284] No container was found matching "etcd"
	I1003 18:39:14.376643   64909 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:39:14.376750   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:39:14.401480   64909 cri.go:89] found id: ""
	I1003 18:39:14.401504   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.401512   64909 logs.go:284] No container was found matching "coredns"
	I1003 18:39:14.401517   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:39:14.401582   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:39:14.426822   64909 cri.go:89] found id: ""
	I1003 18:39:14.426858   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.426871   64909 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:39:14.426879   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:39:14.426946   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:39:14.451679   64909 cri.go:89] found id: ""
	I1003 18:39:14.451710   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.451722   64909 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:39:14.451730   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:39:14.451787   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:39:14.477253   64909 cri.go:89] found id: ""
	I1003 18:39:14.477275   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.477282   64909 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:39:14.477288   64909 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:39:14.477332   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:39:14.501586   64909 cri.go:89] found id: ""
	I1003 18:39:14.501613   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.501621   64909 logs.go:284] No container was found matching "kindnet"
	I1003 18:39:14.501632   64909 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:39:14.501643   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:39:14.561285   64909 logs.go:123] Gathering logs for container status ...
	I1003 18:39:14.561318   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:39:14.589589   64909 logs.go:123] Gathering logs for kubelet ...
	I1003 18:39:14.589614   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:39:14.656775   64909 logs.go:123] Gathering logs for dmesg ...
	I1003 18:39:14.656809   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:39:14.668000   64909 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:39:14.668023   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:39:14.725446   64909 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
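The describe-nodes failure is a downstream symptom: kubectl dials localhost:8443 and nothing is listening because kube-apiserver never became healthy. A quick confirmation one might run on the node, probing the same health endpoints kubeadm polled above (expect connection refused on all three):

    curl -ksS https://192.168.49.2:8443/livez || true
    curl -ksS https://127.0.0.1:10257/healthz || true
    curl -ksS https://127.0.0.1:10259/livez || true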
	W1003 18:39:14.725478   64909 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	W1003 18:39:14.725530   64909 out.go:285] * 
	W1003 18:39:14.725612   64909 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:39:14.725629   64909 out.go:285] * 
	W1003 18:39:14.727399   64909 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:39:14.731087   64909 out.go:203] 
	W1003 18:39:14.732560   64909 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:39:14.732585   64909 out.go:285] * 
	I1003 18:39:14.734183   64909 out.go:203] 
	
	
	==> CRI-O <==
	Oct 03 18:41:39 ha-422561 crio[781]: time="2025-10-03T18:41:39.920303306Z" level=info msg="createCtr: removing container b9dcbed371c452fd901d904df84936d58d7a64ad8fa10d3c0ca2cf8309edf6f5" id=adccfecc-8b61-4a2a-9782-64d132e5213e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:39 ha-422561 crio[781]: time="2025-10-03T18:41:39.920332238Z" level=info msg="createCtr: deleting container b9dcbed371c452fd901d904df84936d58d7a64ad8fa10d3c0ca2cf8309edf6f5 from storage" id=adccfecc-8b61-4a2a-9782-64d132e5213e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:39 ha-422561 crio[781]: time="2025-10-03T18:41:39.92261719Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-422561_kube-system_6803106e6cb30e1b9b282ce29772fddf_0" id=adccfecc-8b61-4a2a-9782-64d132e5213e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:41 ha-422561 crio[781]: time="2025-10-03T18:41:41.896344461Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=b6716162-f992-426c-9a3d-2775d0f53825 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:41:41 ha-422561 crio[781]: time="2025-10-03T18:41:41.897179697Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=13f4975b-33f9-42f8-917e-270a21e06f6e name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:41:41 ha-422561 crio[781]: time="2025-10-03T18:41:41.897957554Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-422561/kube-scheduler" id=0630bdbd-eb29-41ac-b7b6-55676e876c57 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:41 ha-422561 crio[781]: time="2025-10-03T18:41:41.898181074Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:41:41 ha-422561 crio[781]: time="2025-10-03T18:41:41.901439688Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:41:41 ha-422561 crio[781]: time="2025-10-03T18:41:41.901830821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:41:41 ha-422561 crio[781]: time="2025-10-03T18:41:41.917687244Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=0630bdbd-eb29-41ac-b7b6-55676e876c57 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:41 ha-422561 crio[781]: time="2025-10-03T18:41:41.918951998Z" level=info msg="createCtr: deleting container ID 428ae4a3b575ecc67f14d8fd70e56657c0411ce6b100916089780cc91469e6b7 from idIndex" id=0630bdbd-eb29-41ac-b7b6-55676e876c57 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:41 ha-422561 crio[781]: time="2025-10-03T18:41:41.9189903Z" level=info msg="createCtr: removing container 428ae4a3b575ecc67f14d8fd70e56657c0411ce6b100916089780cc91469e6b7" id=0630bdbd-eb29-41ac-b7b6-55676e876c57 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:41 ha-422561 crio[781]: time="2025-10-03T18:41:41.919017817Z" level=info msg="createCtr: deleting container 428ae4a3b575ecc67f14d8fd70e56657c0411ce6b100916089780cc91469e6b7 from storage" id=0630bdbd-eb29-41ac-b7b6-55676e876c57 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:41 ha-422561 crio[781]: time="2025-10-03T18:41:41.920918695Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-422561_kube-system_2640157afe5e174d7402164688eed7be_0" id=0630bdbd-eb29-41ac-b7b6-55676e876c57 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:42 ha-422561 crio[781]: time="2025-10-03T18:41:42.896168028Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=fdabe4b1-fc7c-42ed-bf49-a6062d41f272 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:41:42 ha-422561 crio[781]: time="2025-10-03T18:41:42.896963402Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=45c8cced-efe8-4945-9558-17a32fcc9a8f name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:41:42 ha-422561 crio[781]: time="2025-10-03T18:41:42.897784438Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-422561/kube-controller-manager" id=391da848-83ac-4300-891a-ec45b6c5a1ba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:42 ha-422561 crio[781]: time="2025-10-03T18:41:42.898063093Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:41:42 ha-422561 crio[781]: time="2025-10-03T18:41:42.901378323Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:41:42 ha-422561 crio[781]: time="2025-10-03T18:41:42.902006533Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:41:42 ha-422561 crio[781]: time="2025-10-03T18:41:42.920789343Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=391da848-83ac-4300-891a-ec45b6c5a1ba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:42 ha-422561 crio[781]: time="2025-10-03T18:41:42.922029027Z" level=info msg="createCtr: deleting container ID 7b14ee996d31090b0cc4a9ff7f06622afb688a93082138f40ff6e5f9c39bd9ed from idIndex" id=391da848-83ac-4300-891a-ec45b6c5a1ba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:42 ha-422561 crio[781]: time="2025-10-03T18:41:42.922057683Z" level=info msg="createCtr: removing container 7b14ee996d31090b0cc4a9ff7f06622afb688a93082138f40ff6e5f9c39bd9ed" id=391da848-83ac-4300-891a-ec45b6c5a1ba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:42 ha-422561 crio[781]: time="2025-10-03T18:41:42.922083174Z" level=info msg="createCtr: deleting container 7b14ee996d31090b0cc4a9ff7f06622afb688a93082138f40ff6e5f9c39bd9ed from storage" id=391da848-83ac-4300-891a-ec45b6c5a1ba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:42 ha-422561 crio[781]: time="2025-10-03T18:41:42.924243014Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-422561_kube-system_e643a03771f1e72f527532eff2c66a9c_0" id=391da848-83ac-4300-891a-ec45b6c5a1ba name=/runtime.v1.RuntimeService/CreateContainer
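Every CreateContainer attempt in the CRI-O log above dies with "cannot open sd-bus: No such file or directory", i.e. the OCI runtime cannot reach systemd over D-Bus. A plausible reading (an inference, not something the log states) is that the runtime is configured for the systemd cgroup manager while no systemd bus socket is reachable inside the docker-driver node. Two quick checks one might run:

    # is a systemd D-Bus socket present where the runtime expects one?
    ls -l /run/systemd/private /run/dbus/system_bus_socket 2>/dev/null
    # which cgroup manager is CRI-O configured with (systemd vs cgroupfs)?
    grep -ri cgroup_manager /etc/crio/ 2>/dev/null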
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:41:44.705755    4607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:41:44.706293    4607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:41:44.707841    4607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:41:44.708319    4607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:41:44.709830    4607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:41:44 up  1:24,  0 user,  load average: 0.36, 0.18, 0.11
	Linux ha-422561 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:41:39 ha-422561 kubelet[1961]: E1003 18:41:39.923023    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:41:39 ha-422561 kubelet[1961]:         container etcd start failed in pod etcd-ha-422561_kube-system(6803106e6cb30e1b9b282ce29772fddf): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:41:39 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:41:39 ha-422561 kubelet[1961]: E1003 18:41:39.923057    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-422561" podUID="6803106e6cb30e1b9b282ce29772fddf"
	Oct 03 18:41:41 ha-422561 kubelet[1961]: E1003 18:41:41.895891    1961 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:41:41 ha-422561 kubelet[1961]: E1003 18:41:41.921196    1961 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:41:41 ha-422561 kubelet[1961]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:41:41 ha-422561 kubelet[1961]:  > podSandboxID="a10975bd62b256134c3b4cd528b6d141353311ccb4309c6a5b3dea224dc6ecb8"
	Oct 03 18:41:41 ha-422561 kubelet[1961]: E1003 18:41:41.921279    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:41:41 ha-422561 kubelet[1961]:         container kube-scheduler start failed in pod kube-scheduler-ha-422561_kube-system(2640157afe5e174d7402164688eed7be): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:41:41 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:41:41 ha-422561 kubelet[1961]: E1003 18:41:41.921307    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-422561" podUID="2640157afe5e174d7402164688eed7be"
	Oct 03 18:41:42 ha-422561 kubelet[1961]: E1003 18:41:42.895685    1961 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:41:42 ha-422561 kubelet[1961]: E1003 18:41:42.924500    1961 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:41:42 ha-422561 kubelet[1961]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:41:42 ha-422561 kubelet[1961]:  > podSandboxID="2bca45b92f4f55f540f80dd9d8d3d282362f7f0ecce2ac4786e27a3b4a9cfd4d"
	Oct 03 18:41:42 ha-422561 kubelet[1961]: E1003 18:41:42.924594    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:41:42 ha-422561 kubelet[1961]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-422561_kube-system(e643a03771f1e72f527532eff2c66a9c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:41:42 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:41:42 ha-422561 kubelet[1961]: E1003 18:41:42.924624    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-422561" podUID="e643a03771f1e72f527532eff2c66a9c"
	Oct 03 18:41:43 ha-422561 kubelet[1961]: E1003 18:41:43.354518    1961 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-422561.186b0ef272ca351c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-422561,UID:ha-422561,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-422561 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-422561,},FirstTimestamp:2025-10-03 18:35:13.889039644 +0000 UTC m=+0.583846472,LastTimestamp:2025-10-03 18:35:13.889039644 +0000 UTC m=+0.583846472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-422561,}"
	Oct 03 18:41:43 ha-422561 kubelet[1961]: E1003 18:41:43.919754    1961 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-422561\" not found"
	Oct 03 18:41:44 ha-422561 kubelet[1961]: E1003 18:41:44.543449    1961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-422561?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 03 18:41:44 ha-422561 kubelet[1961]: I1003 18:41:44.710430    1961 kubelet_node_status.go:75] "Attempting to register node" node="ha-422561"
	Oct 03 18:41:44 ha-422561 kubelet[1961]: E1003 18:41:44.710811    1961 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-422561"
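The kubelet section is consistent with the CRI-O log: etcd, kube-scheduler, and kube-controller-manager all fail at StartContainer with the same sd-bus CreateContainerError, so the node never registers and every apiserver call is refused. A one-liner to see how widespread the failure is (a sketch):

    # count 'start failed' occurrences per component in the kubelet journal
    sudo journalctl -u kubelet --no-pager | grep -o 'container [a-z-]* start failed' | sort | uniq -c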
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561: exit status 6 (296.419464ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1003 18:41:45.077409   77169 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-422561" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (43.32s)
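For reference: the exit status 6 above coincides with the kubeconfig endpoint error logged at status.go:458, and the stdout warning already names the advertised fix. If one were debugging this interactively, the corresponding command would be:

    # re-point the kubectl context at the current endpoint for this profile
    out/minikube-linux-amd64 update-context -p ha-422561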

x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.54s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-422561" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-422561\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-422561\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-422561\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-422561" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-422561\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-422561\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-422561\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-422561
helpers_test.go:243: (dbg) docker inspect ha-422561:

-- stdout --
	[
	    {
	        "Id": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	        "Created": "2025-10-03T18:31:00.396132938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 65481,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T18:31:00.428325646Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hostname",
	        "HostsPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hosts",
	        "LogPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512-json.log",
	        "Name": "/ha-422561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-422561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-422561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	                "LowerDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-422561",
	                "Source": "/var/lib/docker/volumes/ha-422561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422561",
	                "name.minikube.sigs.k8s.io": "ha-422561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3084976d568ce061948ebe671f279a80502b1d28417f2be7c2497961eac2a5aa",
	            "SandboxKey": "/var/run/docker/netns/3084976d568c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:e4:3c:eb:d3:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de6aa7ca29f453c0d15cb280abde7ee215f554c89e78e3db8a0f7590468114b5",
	                    "EndpointID": "1b961733d045b77a64efb8afa6caa273125f56ec888f823b790f5454f23ca3b7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422561",
	                        "eef8fc426b2b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
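Note: the host ports under NetworkSettings.Ports above are ephemeral; minikube resolves them at runtime with a Go template, the same invocation that appears later in this log:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-422561

For this container that yields 32783, the port the SSH provisioner connects to.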
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561: exit status 6 (290.016278ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1003 18:41:45.695932   77413 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
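Note: exit status 6 here is a kubeconfig problem rather than a host problem: stderr shows the "ha-422561" entry is missing from /home/jenkins/minikube-integration/21625-8669/kubeconfig. The warning's suggested repair, sketched against this profile (not a step the harness actually ran), would be:

	out/minikube-linux-amd64 -p ha-422561 update-context
	kubectl config current-context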
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-889240 image ls --format table --alsologtostderr                                                     │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ image   │ functional-889240 image build -t localhost/my-image:functional-889240 testdata/build --alsologtostderr          │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:27 UTC │
	│ image   │ functional-889240 image ls                                                                                      │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:27 UTC │ 03 Oct 25 18:27 UTC │
	│ delete  │ -p functional-889240                                                                                            │ functional-889240 │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │ 03 Oct 25 18:30 UTC │
	│ start   │ ha-422561 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- rollout status deployment/busybox                                                          │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node add --alsologtostderr -v 5                                                                       │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node stop m02 --alsologtostderr -v 5                                                                  │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node start m02 --alsologtostderr -v 5                                                                 │ ha-422561         │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
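Note: none of the ha-422561 rows in this audit table record an END TIME, consistent with the `start --ha` command never completing and every follow-on step timing out. A quick cross-check of node state, reusing the kubectl passthrough pattern from the table (illustrative only, not captured in this run), would be:

	out/minikube-linux-amd64 -p ha-422561 kubectl -- get nodes -o wide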
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:30:55
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:30:55.351405   64909 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:30:55.351662   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351671   64909 out.go:374] Setting ErrFile to fd 2...
	I1003 18:30:55.351675   64909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:55.351854   64909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:30:55.352339   64909 out.go:368] Setting JSON to false
	I1003 18:30:55.353203   64909 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4406,"bootTime":1759511849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:30:55.353289   64909 start.go:140] virtualization: kvm guest
	I1003 18:30:55.355458   64909 out.go:179] * [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:30:55.356815   64909 notify.go:220] Checking for updates...
	I1003 18:30:55.356884   64909 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:30:55.358389   64909 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:30:55.359964   64909 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:30:55.361351   64909 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:30:55.362647   64909 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:30:55.363956   64909 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:30:55.365351   64909 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:30:55.387768   64909 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:30:55.387885   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.443407   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.433728571 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.443516   64909 docker.go:318] overlay module found
	I1003 18:30:55.445440   64909 out.go:179] * Using the docker driver based on user configuration
	I1003 18:30:55.446777   64909 start.go:304] selected driver: docker
	I1003 18:30:55.446793   64909 start.go:924] validating driver "docker" against <nil>
	I1003 18:30:55.446808   64909 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:30:55.447403   64909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:30:55.498777   64909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:30:55.489521827 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:30:55.498958   64909 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 18:30:55.499206   64909 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:30:55.501187   64909 out.go:179] * Using Docker driver with root privileges
	I1003 18:30:55.502312   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:30:55.502386   64909 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 18:30:55.502397   64909 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 18:30:55.502459   64909 start.go:348] cluster config:
	{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:30:55.503779   64909 out.go:179] * Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	I1003 18:30:55.504816   64909 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:30:55.506028   64909 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:30:55.507131   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:55.507167   64909 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:30:55.507169   64909 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:30:55.507175   64909 cache.go:58] Caching tarball of preloaded images
	I1003 18:30:55.507294   64909 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:30:55.507311   64909 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:30:55.507736   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:30:55.507764   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json: {Name:mk1ece959bac74a473416f0dfc8af04a6136d7b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:30:55.527458   64909 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:30:55.527478   64909 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:30:55.527494   64909 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:30:55.527527   64909 start.go:360] acquireMachinesLock for ha-422561: {Name:mk32fd04a5d9b5f89831583bab7d7527f4d187a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:30:55.527631   64909 start.go:364] duration metric: took 81.336µs to acquireMachinesLock for "ha-422561"
	I1003 18:30:55.527657   64909 start.go:93] Provisioning new machine with config: &{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:30:55.527748   64909 start.go:125] createHost starting for "" (driver="docker")
	I1003 18:30:55.529663   64909 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1003 18:30:55.529898   64909 start.go:159] libmachine.API.Create for "ha-422561" (driver="docker")
	I1003 18:30:55.529933   64909 client.go:168] LocalClient.Create starting
	I1003 18:30:55.530028   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem
	I1003 18:30:55.530072   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530097   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530187   64909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem
	I1003 18:30:55.530226   64909 main.go:141] libmachine: Decoding PEM data...
	I1003 18:30:55.530238   64909 main.go:141] libmachine: Parsing certificate...
	I1003 18:30:55.530612   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 18:30:55.547068   64909 cli_runner.go:211] docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 18:30:55.547129   64909 network_create.go:284] running [docker network inspect ha-422561] to gather additional debugging logs...
	I1003 18:30:55.547146   64909 cli_runner.go:164] Run: docker network inspect ha-422561
	W1003 18:30:55.563141   64909 cli_runner.go:211] docker network inspect ha-422561 returned with exit code 1
	I1003 18:30:55.563167   64909 network_create.go:287] error running [docker network inspect ha-422561]: docker network inspect ha-422561: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-422561 not found
	I1003 18:30:55.563179   64909 network_create.go:289] output of [docker network inspect ha-422561]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-422561 not found
	
	** /stderr **
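Note: the two failed `docker network inspect` calls above are the expected probe-before-create path; the network simply does not exist yet. The same existence check can be reproduced with a name filter (a hypothetical follow-up, not part of this log):

	docker network ls --filter name=ha-422561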
	I1003 18:30:55.563276   64909 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:30:55.579301   64909 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00157b3a0}
	I1003 18:30:55.579336   64909 network_create.go:124] attempt to create docker network ha-422561 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1003 18:30:55.579388   64909 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-422561 ha-422561
	I1003 18:30:55.634233   64909 network_create.go:108] docker network ha-422561 192.168.49.0/24 created
	I1003 18:30:55.634260   64909 kic.go:121] calculated static IP "192.168.49.2" for the "ha-422561" container
	I1003 18:30:55.634318   64909 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 18:30:55.649960   64909 cli_runner.go:164] Run: docker volume create ha-422561 --label name.minikube.sigs.k8s.io=ha-422561 --label created_by.minikube.sigs.k8s.io=true
	I1003 18:30:55.667186   64909 oci.go:103] Successfully created a docker volume ha-422561
	I1003 18:30:55.667250   64909 cli_runner.go:164] Run: docker run --rm --name ha-422561-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --entrypoint /usr/bin/test -v ha-422561:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 18:30:56.041615   64909 oci.go:107] Successfully prepared a docker volume ha-422561
	I1003 18:30:56.041648   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:30:56.041669   64909 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 18:30:56.041727   64909 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 18:31:00.326417   64909 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-422561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.284654466s)
	I1003 18:31:00.326457   64909 kic.go:203] duration metric: took 4.284784967s to extract preloaded images to volume ...
	W1003 18:31:00.326567   64909 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1003 18:31:00.326610   64909 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1003 18:31:00.326657   64909 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 18:31:00.381592   64909 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-422561 --name ha-422561 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-422561 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-422561 --network ha-422561 --ip 192.168.49.2 --volume ha-422561:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
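Note: each `--publish=127.0.0.1::<port>` in the run command above binds a container port to an ephemeral loopback port; these become the 32783-32787 bindings visible in the docker inspect output earlier. They can be listed directly with:

	docker port ha-422561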
	I1003 18:31:00.641348   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Running}}
	I1003 18:31:00.659876   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:00.678319   64909 cli_runner.go:164] Run: docker exec ha-422561 stat /var/lib/dpkg/alternatives/iptables
	I1003 18:31:00.728414   64909 oci.go:144] the created container "ha-422561" has a running status.
	I1003 18:31:00.728450   64909 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa...
	I1003 18:31:01.103610   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1003 18:31:01.103663   64909 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 18:31:01.128670   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.147200   64909 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 18:31:01.147218   64909 kic_runner.go:114] Args: [docker exec --privileged ha-422561 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 18:31:01.189023   64909 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:31:01.207395   64909 machine.go:93] provisionDockerMachine start ...
	I1003 18:31:01.207497   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.226029   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.226282   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.226299   64909 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:31:01.372245   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.372275   64909 ubuntu.go:182] provisioning hostname "ha-422561"
	I1003 18:31:01.372335   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.390674   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.390889   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.390902   64909 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-422561 && echo "ha-422561" | sudo tee /etc/hostname
	I1003 18:31:01.544850   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:31:01.544932   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.563695   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:01.563966   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:01.564014   64909 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422561/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:31:01.708942   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:31:01.708971   64909 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:31:01.709036   64909 ubuntu.go:190] setting up certificates
	I1003 18:31:01.709048   64909 provision.go:84] configureAuth start
	I1003 18:31:01.709101   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:01.727778   64909 provision.go:143] copyHostCerts
	I1003 18:31:01.727814   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727849   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:31:01.727858   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:31:01.727940   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:31:01.728054   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728079   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:31:01.728090   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:31:01.728137   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:31:01.728200   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728225   64909 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:31:01.728234   64909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:31:01.728266   64909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:31:01.728336   64909 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.ha-422561 san=[127.0.0.1 192.168.49.2 ha-422561 localhost minikube]
	I1003 18:31:01.864219   64909 provision.go:177] copyRemoteCerts
	I1003 18:31:01.864281   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:31:01.864317   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:01.882069   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:01.982800   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:31:01.982877   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 18:31:02.000887   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:31:02.000952   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 18:31:02.017591   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:31:02.017639   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:31:02.034172   64909 provision.go:87] duration metric: took 325.10989ms to configureAuth
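Note: the server certificate generated above carries the SANs from the san=[...] field (127.0.0.1, 192.168.49.2, ha-422561, localhost, minikube). Assuming a reasonably recent openssl on the host, they can be confirmed with:

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem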
	I1003 18:31:02.034202   64909 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:31:02.034393   64909 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:31:02.034508   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.052111   64909 main.go:141] libmachine: Using SSH client type: native
	I1003 18:31:02.052326   64909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1003 18:31:02.052344   64909 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:31:02.295594   64909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:31:02.295629   64909 machine.go:96] duration metric: took 1.088207423s to provisionDockerMachine
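Note: the provisioning step just completed wrote /etc/sysconfig/crio.minikube inside the node and restarted cri-o with `--insecure-registry 10.96.0.0/12` (the service CIDR). Once the node is reachable, the applied file can be read back through minikube's ssh passthrough (a sketch, not captured in this run):

	out/minikube-linux-amd64 -p ha-422561 ssh -- cat /etc/sysconfig/crio.minikube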
	I1003 18:31:02.295640   64909 client.go:171] duration metric: took 6.765697238s to LocalClient.Create
	I1003 18:31:02.295660   64909 start.go:167] duration metric: took 6.765761646s to libmachine.API.Create "ha-422561"
	I1003 18:31:02.295669   64909 start.go:293] postStartSetup for "ha-422561" (driver="docker")
	I1003 18:31:02.295682   64909 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:31:02.295752   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:31:02.295789   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.312783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.414720   64909 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:31:02.418127   64909 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:31:02.418149   64909 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:31:02.418159   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:31:02.418213   64909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:31:02.418310   64909 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:31:02.418326   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:31:02.418453   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:31:02.425623   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:02.444405   64909 start.go:296] duration metric: took 148.722871ms for postStartSetup
	I1003 18:31:02.444748   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.462226   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:31:02.462456   64909 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:31:02.462495   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.478737   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.575846   64909 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:31:02.580138   64909 start.go:128] duration metric: took 7.052376255s to createHost
	I1003 18:31:02.580160   64909 start.go:83] releasing machines lock for "ha-422561", held for 7.052515614s
	I1003 18:31:02.580230   64909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:31:02.596730   64909 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:31:02.596776   64909 ssh_runner.go:195] Run: cat /version.json
	I1003 18:31:02.596798   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.596817   64909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:31:02.613783   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.614183   64909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:31:02.764865   64909 ssh_runner.go:195] Run: systemctl --version
	I1003 18:31:02.771251   64909 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:31:02.803643   64909 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:31:02.807949   64909 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:31:02.808044   64909 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:31:02.833024   64909 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
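	The find invocation above is logged with its shell quoting stripped by ssh_runner; restoring it, the bridge-CNI disable step amounts to the following sketch (same command, paths as in this run):
	  sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;
	  # each matching bridge/podman config is renamed with a .mk_disabled suffix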
	I1003 18:31:02.833043   64909 start.go:495] detecting cgroup driver to use...
	I1003 18:31:02.833073   64909 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:31:02.833108   64909 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:31:02.847613   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:31:02.858865   64909 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:31:02.858910   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:31:02.874470   64909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:31:02.890554   64909 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:31:02.970342   64909 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:31:03.055310   64909 docker.go:234] disabling docker service ...
	I1003 18:31:03.055369   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:31:03.072668   64909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:31:03.084308   64909 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:31:03.163959   64909 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:31:03.241930   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:31:03.253863   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:31:03.266905   64909 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:31:03.266971   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.276795   64909 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:31:03.276848   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.285157   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.293117   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.301070   64909 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:31:03.308489   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.316789   64909 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.329424   64909 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:31:03.337651   64909 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:31:03.344839   64909 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:31:03.352026   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.430894   64909 ssh_runner.go:195] Run: sudo systemctl restart crio
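	Collected in one place, the cri-o reconfiguration performed above is roughly this shell sequence (same sed expressions as logged; assumes the same /etc/crio/crio.conf.d/02-crio.conf drop-in):
	  CONF=/etc/crio/crio.conf.d/02-crio.conf
	  # pin the pause image and switch the cgroup manager to systemd
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
	  # re-add conmon_cgroup directly after the cgroup_manager line
	  sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	  sudo systemctl daemon-reload && sudo systemctl restart crio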
	I1003 18:31:03.533915   64909 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:31:03.534002   64909 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:31:03.537783   64909 start.go:563] Will wait 60s for crictl version
	I1003 18:31:03.537838   64909 ssh_runner.go:195] Run: which crictl
	I1003 18:31:03.541393   64909 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:31:03.564883   64909 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
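	The readiness probe behind these lines can be reproduced by hand; a minimal sketch, assuming the crio socket path used in this run (otherwise the endpoint comes from the /etc/crictl.yaml written above):
	  stat /var/run/crio/crio.sock   # socket must exist before crictl is useful
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version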
	I1003 18:31:03.564963   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.591363   64909 ssh_runner.go:195] Run: crio --version
	I1003 18:31:03.619425   64909 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:31:03.620466   64909 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:31:03.637151   64909 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:31:03.641184   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
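	The /etc/hosts update is idempotent: grep -v drops any stale host.minikube.internal entry before the fresh one is appended. The logged command, reformatted for readability:
	  { grep -v $'\thost.minikube.internal$' /etc/hosts; \
	    echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts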
	I1003 18:31:03.651292   64909 kubeadm.go:883] updating cluster {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:31:03.651379   64909 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:31:03.651428   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.680883   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.680904   64909 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:31:03.680955   64909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:31:03.706829   64909 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:31:03.706859   64909 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:31:03.706866   64909 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 18:31:03.706953   64909 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
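	Per the scp steps later in the log, the rendered unit above lands as a systemd drop-in; an equivalent manual install would be (paths match the scp targets logged at 18:31:03):
	  sudo mkdir -p /etc/systemd/system/kubelet.service.d
	  # the [Unit]/[Service]/[Install] text above becomes 10-kubeadm.conf in that directory
	  sudo systemctl daemon-reload && sudo systemctl start kubelet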
	I1003 18:31:03.707032   64909 ssh_runner.go:195] Run: crio config
	I1003 18:31:03.751501   64909 cni.go:84] Creating CNI manager for ""
	I1003 18:31:03.751523   64909 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:31:03.751538   64909 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:31:03.751558   64909 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422561 NodeName:ha-422561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:31:03.751669   64909 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422561"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
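	One way to sanity-check a rendered config like the one above before a real init is a kubeadm dry run; this run does not perform it, so the following is only a hedged suggestion using the path the config is staged to:
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run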
	I1003 18:31:03.751691   64909 kube-vip.go:115] generating kube-vip config ...
	I1003 18:31:03.751728   64909 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1003 18:31:03.763009   64909 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:31:03.763125   64909 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
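	Because "lsmod | grep ip_vs" exited non-zero above, the generated manifest omits kube-vip's IPVS control-plane load-balancing. On a host where the module is available, a hedged check-and-load sequence would be:
	  lsmod | grep ip_vs || sudo modprobe ip_vs
	  lsmod | grep ip_vs   # load-balancing can only be enabled if this now succeeds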
	I1003 18:31:03.763181   64909 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:31:03.770585   64909 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:31:03.770633   64909 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 18:31:03.778069   64909 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1003 18:31:03.790397   64909 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:31:03.805112   64909 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1003 18:31:03.817362   64909 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1003 18:31:03.830824   64909 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1003 18:31:03.834300   64909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:31:03.843861   64909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:31:03.921407   64909 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:31:03.944431   64909 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561 for IP: 192.168.49.2
	I1003 18:31:03.944451   64909 certs.go:195] generating shared ca certs ...
	I1003 18:31:03.944468   64909 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:03.944607   64909 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:31:03.944644   64909 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:31:03.944652   64909 certs.go:257] generating profile certs ...
	I1003 18:31:03.944708   64909 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key
	I1003 18:31:03.944722   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt with IP's: []
	I1003 18:31:04.171087   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt ...
	I1003 18:31:04.171118   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt: {Name:mked6cb0f731cbb630d2b187c4975015a458a284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171291   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key ...
	I1003 18:31:04.171301   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key: {Name:mk0c9f0a0941d99f2af213cd316467f053532c99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.171391   64909 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905
	I1003 18:31:04.171406   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1003 18:31:04.383185   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 ...
	I1003 18:31:04.383218   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905: {Name:mkc24c55d4abb428b3559a93e6e301be2cab703a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383381   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 ...
	I1003 18:31:04.383394   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905: {Name:mk0576a73623089a3eecf4e34bbbd214545e2247 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.383486   64909 certs.go:382] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt
	I1003 18:31:04.383601   64909 certs.go:386] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2bd5c905 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key
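	A hedged way to confirm that the SANs requested above actually landed in the generated apiserver certificate (file path as copied in this run):
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'
	  # expect IP Address entries for 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2, 192.168.49.254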
	I1003 18:31:04.383674   64909 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key
	I1003 18:31:04.383689   64909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt with IP's: []
	I1003 18:31:04.628083   64909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt ...
	I1003 18:31:04.628112   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt: {Name:mkc19179c67a2559968759165df93d304eb42db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.628269   64909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key ...
	I1003 18:31:04.628279   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key: {Name:mka8b2392a3d721a70329b852837f3403643f948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:31:04.628347   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:31:04.628364   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:31:04.628375   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:31:04.628384   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:31:04.628397   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:31:04.628410   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:31:04.628430   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:31:04.628442   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:31:04.628492   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:31:04.628525   64909 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:31:04.628535   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:31:04.628558   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:31:04.628580   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:31:04.628601   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:31:04.628637   64909 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:31:04.628666   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.628680   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.628692   64909 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.629254   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:31:04.646879   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:31:04.663465   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:31:04.679837   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:31:04.695959   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 18:31:04.712689   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:31:04.729310   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:31:04.745587   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:31:04.761663   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:31:04.779546   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:31:04.796119   64909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:31:04.813748   64909 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:31:04.826629   64909 ssh_runner.go:195] Run: openssl version
	I1003 18:31:04.832848   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:31:04.840960   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844465   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.844506   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:31:04.878276   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:31:04.886714   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:31:04.894672   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898099   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.898154   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:31:04.931606   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:31:04.940357   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:31:04.948454   64909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952097   64909 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.952148   64909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:31:04.985741   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
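	The symlink names used above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes, which is how the TLS stack locates a CA in /etc/ssl/certs. The convention can be reproduced directly; a sketch using the minikube CA from this run:
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # b5213941.0 here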
	I1003 18:31:04.994005   64909 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:31:04.997322   64909 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 18:31:04.997379   64909 kubeadm.go:400] StartCluster: {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:31:04.997476   64909 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:31:04.997539   64909 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:31:05.022530   64909 cri.go:89] found id: ""
	I1003 18:31:05.022595   64909 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:31:05.030329   64909 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:31:05.037782   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:31:05.037841   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:31:05.045127   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:31:05.045142   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:31:05.045174   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:31:05.052235   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:31:05.052286   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:31:05.059062   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:31:05.066034   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:31:05.066081   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:31:05.072912   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.079906   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:31:05.079966   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:31:05.086575   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:31:05.093500   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:31:05.093559   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:31:05.100246   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:31:05.136174   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:31:05.136254   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:31:05.156320   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:31:05.156407   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:31:05.156462   64909 kubeadm.go:318] OS: Linux
	I1003 18:31:05.156539   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:31:05.156610   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:31:05.156705   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:31:05.156790   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:31:05.156865   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:31:05.156939   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:31:05.157035   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:31:05.157127   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:31:05.210250   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:31:05.210408   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:31:05.210566   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:31:05.217643   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:31:05.219725   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:31:05.219828   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:31:05.219943   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:31:05.398135   64909 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 18:31:05.511875   64909 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 18:31:05.863575   64909 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 18:31:06.044823   64909 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 18:31:06.083505   64909 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 18:31:06.083616   64909 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.181464   64909 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 18:31:06.181591   64909 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:31:06.345813   64909 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 18:31:06.565989   64909 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 18:31:06.759809   64909 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 18:31:06.759892   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:31:06.883072   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:31:07.211268   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:31:07.403076   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:31:07.687412   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:31:08.052476   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:31:08.052957   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:31:08.054984   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:31:08.056889   64909 out.go:252]   - Booting up control plane ...
	I1003 18:31:08.056984   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:31:08.057047   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:31:08.057102   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:31:08.069846   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:31:08.069954   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:31:08.077490   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:31:08.077826   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:31:08.077870   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:31:08.170750   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:31:08.170893   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:31:09.172507   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001794723s
	I1003 18:31:09.175233   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:31:09.175335   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:31:09.175418   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:31:09.175496   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:35:09.177158   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	I1003 18:35:09.177466   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	I1003 18:35:09.177673   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	I1003 18:35:09.177731   64909 kubeadm.go:318] 
	I1003 18:35:09.177887   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:35:09.178114   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:35:09.178320   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:35:09.178580   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:35:09.178818   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:35:09.179017   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:35:09.179033   64909 kubeadm.go:318] 
	I1003 18:35:09.182028   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:09.182304   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:35:09.182918   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1003 18:35:09.183015   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1003 18:35:09.183174   64909 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-422561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001794723s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001064557s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001283425s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00125879s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
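	Following the hint in the failure output above, a hedged triage sequence for a control plane that never became healthy (CONTAINERID is a placeholder for an ID taken from the ps listing):
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	  sudo journalctl -u kubelet --no-pager | tail -n 50   # kubelet-side view of the crash loop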
	I1003 18:35:09.183243   64909 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 18:35:11.953646   64909 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.770379999s)
	I1003 18:35:11.953721   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:35:11.965876   64909 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:35:11.965928   64909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:35:11.973363   64909 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:35:11.973382   64909 kubeadm.go:157] found existing configuration files:
	
	I1003 18:35:11.973419   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:35:11.980752   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:35:11.980806   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:35:11.987857   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:35:11.995081   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:35:11.995127   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:35:12.001778   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.009063   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:35:12.009126   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:35:12.015927   64909 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:35:12.022875   64909 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:35:12.022943   64909 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:35:12.029549   64909 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:35:12.082477   64909 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 18:35:12.138594   64909 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:39:14.312592   64909 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1003 18:39:14.312818   64909 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 18:39:14.315914   64909 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:39:14.315992   64909 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:39:14.316115   64909 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:39:14.316166   64909 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 18:39:14.316250   64909 kubeadm.go:318] OS: Linux
	I1003 18:39:14.316328   64909 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:39:14.316401   64909 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:39:14.316475   64909 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:39:14.316553   64909 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:39:14.316624   64909 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:39:14.316701   64909 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:39:14.316751   64909 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:39:14.316825   64909 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 18:39:14.316936   64909 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:39:14.317123   64909 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:39:14.317262   64909 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:39:14.317314   64909 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:39:14.319872   64909 out.go:252]   - Generating certificates and keys ...
	I1003 18:39:14.319940   64909 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:39:14.320033   64909 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:39:14.320122   64909 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 18:39:14.320186   64909 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 18:39:14.320253   64909 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 18:39:14.320299   64909 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 18:39:14.320350   64909 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 18:39:14.320420   64909 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 18:39:14.320509   64909 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 18:39:14.320604   64909 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 18:39:14.320671   64909 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 18:39:14.320751   64909 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:39:14.320828   64909 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:39:14.320904   64909 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:39:14.321006   64909 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:39:14.321096   64909 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:39:14.321174   64909 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:39:14.321279   64909 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:39:14.321373   64909 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:39:14.322793   64909 out.go:252]   - Booting up control plane ...
	I1003 18:39:14.322884   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:39:14.323004   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:39:14.323072   64909 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:39:14.323162   64909 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:39:14.323237   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:39:14.323335   64909 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:39:14.323415   64909 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:39:14.323456   64909 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:39:14.323557   64909 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:39:14.323652   64909 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:39:14.323702   64909 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001540709s
	I1003 18:39:14.323792   64909 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:39:14.323860   64909 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:39:14.323946   64909 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:39:14.324043   64909 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:39:14.324124   64909 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	I1003 18:39:14.324186   64909 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	I1003 18:39:14.324248   64909 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	I1003 18:39:14.324258   64909 kubeadm.go:318] 
	I1003 18:39:14.324352   64909 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 18:39:14.324439   64909 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:39:14.324519   64909 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 18:39:14.324595   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 18:39:14.324687   64909 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 18:39:14.324773   64909 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 18:39:14.324799   64909 kubeadm.go:318] 
	I1003 18:39:14.324836   64909 kubeadm.go:402] duration metric: took 8m9.327461574s to StartCluster
	I1003 18:39:14.324877   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 18:39:14.324935   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 18:39:14.352551   64909 cri.go:89] found id: ""
	I1003 18:39:14.352594   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.352608   64909 logs.go:284] No container was found matching "kube-apiserver"
	I1003 18:39:14.352617   64909 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 18:39:14.352684   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 18:39:14.376604   64909 cri.go:89] found id: ""
	I1003 18:39:14.376629   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.376638   64909 logs.go:284] No container was found matching "etcd"
	I1003 18:39:14.376643   64909 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 18:39:14.376750   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 18:39:14.401480   64909 cri.go:89] found id: ""
	I1003 18:39:14.401504   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.401512   64909 logs.go:284] No container was found matching "coredns"
	I1003 18:39:14.401517   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 18:39:14.401582   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 18:39:14.426822   64909 cri.go:89] found id: ""
	I1003 18:39:14.426858   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.426871   64909 logs.go:284] No container was found matching "kube-scheduler"
	I1003 18:39:14.426879   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 18:39:14.426946   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 18:39:14.451679   64909 cri.go:89] found id: ""
	I1003 18:39:14.451710   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.451722   64909 logs.go:284] No container was found matching "kube-proxy"
	I1003 18:39:14.451730   64909 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 18:39:14.451787   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 18:39:14.477253   64909 cri.go:89] found id: ""
	I1003 18:39:14.477275   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.477282   64909 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 18:39:14.477288   64909 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 18:39:14.477332   64909 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 18:39:14.501586   64909 cri.go:89] found id: ""
	I1003 18:39:14.501613   64909 logs.go:282] 0 containers: []
	W1003 18:39:14.501621   64909 logs.go:284] No container was found matching "kindnet"
	I1003 18:39:14.501632   64909 logs.go:123] Gathering logs for CRI-O ...
	I1003 18:39:14.501643   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 18:39:14.561285   64909 logs.go:123] Gathering logs for container status ...
	I1003 18:39:14.561318   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 18:39:14.589589   64909 logs.go:123] Gathering logs for kubelet ...
	I1003 18:39:14.589614   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:39:14.656775   64909 logs.go:123] Gathering logs for dmesg ...
	I1003 18:39:14.656809   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:39:14.668000   64909 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:39:14.668023   64909 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:39:14.725446   64909 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 18:39:14.718419    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.718941    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720510    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.720909    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:39:14.722416    2584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1003 18:39:14.725478   64909 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	W1003 18:39:14.725530   64909 out.go:285] * 
	W1003 18:39:14.725612   64909 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:39:14.725629   64909 out.go:285] * 
	W1003 18:39:14.727399   64909 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:39:14.731087   64909 out.go:203] 
	W1003 18:39:14.732560   64909 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001540709s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000854978s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000930119s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001033396s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:39:14.732585   64909 out.go:285] * 
	I1003 18:39:14.734183   64909 out.go:203] 
	
	
	==> CRI-O <==
	Oct 03 18:41:39 ha-422561 crio[781]: time="2025-10-03T18:41:39.920303306Z" level=info msg="createCtr: removing container b9dcbed371c452fd901d904df84936d58d7a64ad8fa10d3c0ca2cf8309edf6f5" id=adccfecc-8b61-4a2a-9782-64d132e5213e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:39 ha-422561 crio[781]: time="2025-10-03T18:41:39.920332238Z" level=info msg="createCtr: deleting container b9dcbed371c452fd901d904df84936d58d7a64ad8fa10d3c0ca2cf8309edf6f5 from storage" id=adccfecc-8b61-4a2a-9782-64d132e5213e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:39 ha-422561 crio[781]: time="2025-10-03T18:41:39.92261719Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-422561_kube-system_6803106e6cb30e1b9b282ce29772fddf_0" id=adccfecc-8b61-4a2a-9782-64d132e5213e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:41 ha-422561 crio[781]: time="2025-10-03T18:41:41.896344461Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=b6716162-f992-426c-9a3d-2775d0f53825 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:41:41 ha-422561 crio[781]: time="2025-10-03T18:41:41.897179697Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=13f4975b-33f9-42f8-917e-270a21e06f6e name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:41:41 ha-422561 crio[781]: time="2025-10-03T18:41:41.897957554Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-422561/kube-scheduler" id=0630bdbd-eb29-41ac-b7b6-55676e876c57 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:41 ha-422561 crio[781]: time="2025-10-03T18:41:41.898181074Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:41:41 ha-422561 crio[781]: time="2025-10-03T18:41:41.901439688Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:41:41 ha-422561 crio[781]: time="2025-10-03T18:41:41.901830821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:41:41 ha-422561 crio[781]: time="2025-10-03T18:41:41.917687244Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=0630bdbd-eb29-41ac-b7b6-55676e876c57 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:41 ha-422561 crio[781]: time="2025-10-03T18:41:41.918951998Z" level=info msg="createCtr: deleting container ID 428ae4a3b575ecc67f14d8fd70e56657c0411ce6b100916089780cc91469e6b7 from idIndex" id=0630bdbd-eb29-41ac-b7b6-55676e876c57 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:41 ha-422561 crio[781]: time="2025-10-03T18:41:41.9189903Z" level=info msg="createCtr: removing container 428ae4a3b575ecc67f14d8fd70e56657c0411ce6b100916089780cc91469e6b7" id=0630bdbd-eb29-41ac-b7b6-55676e876c57 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:41 ha-422561 crio[781]: time="2025-10-03T18:41:41.919017817Z" level=info msg="createCtr: deleting container 428ae4a3b575ecc67f14d8fd70e56657c0411ce6b100916089780cc91469e6b7 from storage" id=0630bdbd-eb29-41ac-b7b6-55676e876c57 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:41 ha-422561 crio[781]: time="2025-10-03T18:41:41.920918695Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-422561_kube-system_2640157afe5e174d7402164688eed7be_0" id=0630bdbd-eb29-41ac-b7b6-55676e876c57 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:42 ha-422561 crio[781]: time="2025-10-03T18:41:42.896168028Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=fdabe4b1-fc7c-42ed-bf49-a6062d41f272 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:41:42 ha-422561 crio[781]: time="2025-10-03T18:41:42.896963402Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=45c8cced-efe8-4945-9558-17a32fcc9a8f name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:41:42 ha-422561 crio[781]: time="2025-10-03T18:41:42.897784438Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-422561/kube-controller-manager" id=391da848-83ac-4300-891a-ec45b6c5a1ba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:42 ha-422561 crio[781]: time="2025-10-03T18:41:42.898063093Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:41:42 ha-422561 crio[781]: time="2025-10-03T18:41:42.901378323Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:41:42 ha-422561 crio[781]: time="2025-10-03T18:41:42.902006533Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:41:42 ha-422561 crio[781]: time="2025-10-03T18:41:42.920789343Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=391da848-83ac-4300-891a-ec45b6c5a1ba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:42 ha-422561 crio[781]: time="2025-10-03T18:41:42.922029027Z" level=info msg="createCtr: deleting container ID 7b14ee996d31090b0cc4a9ff7f06622afb688a93082138f40ff6e5f9c39bd9ed from idIndex" id=391da848-83ac-4300-891a-ec45b6c5a1ba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:42 ha-422561 crio[781]: time="2025-10-03T18:41:42.922057683Z" level=info msg="createCtr: removing container 7b14ee996d31090b0cc4a9ff7f06622afb688a93082138f40ff6e5f9c39bd9ed" id=391da848-83ac-4300-891a-ec45b6c5a1ba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:42 ha-422561 crio[781]: time="2025-10-03T18:41:42.922083174Z" level=info msg="createCtr: deleting container 7b14ee996d31090b0cc4a9ff7f06622afb688a93082138f40ff6e5f9c39bd9ed from storage" id=391da848-83ac-4300-891a-ec45b6c5a1ba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:41:42 ha-422561 crio[781]: time="2025-10-03T18:41:42.924243014Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-422561_kube-system_e643a03771f1e72f527532eff2c66a9c_0" id=391da848-83ac-4300-891a-ec45b6c5a1ba name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:41:46.253004    4785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:41:46.253516    4785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:41:46.255030    4785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:41:46.255419    4785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:41:46.256924    4785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:41:46 up  1:24,  0 user,  load average: 0.41, 0.19, 0.11
	Linux ha-422561 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:41:39 ha-422561 kubelet[1961]: E1003 18:41:39.923023    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:41:39 ha-422561 kubelet[1961]:         container etcd start failed in pod etcd-ha-422561_kube-system(6803106e6cb30e1b9b282ce29772fddf): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:41:39 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:41:39 ha-422561 kubelet[1961]: E1003 18:41:39.923057    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-422561" podUID="6803106e6cb30e1b9b282ce29772fddf"
	Oct 03 18:41:41 ha-422561 kubelet[1961]: E1003 18:41:41.895891    1961 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:41:41 ha-422561 kubelet[1961]: E1003 18:41:41.921196    1961 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:41:41 ha-422561 kubelet[1961]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:41:41 ha-422561 kubelet[1961]:  > podSandboxID="a10975bd62b256134c3b4cd528b6d141353311ccb4309c6a5b3dea224dc6ecb8"
	Oct 03 18:41:41 ha-422561 kubelet[1961]: E1003 18:41:41.921279    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:41:41 ha-422561 kubelet[1961]:         container kube-scheduler start failed in pod kube-scheduler-ha-422561_kube-system(2640157afe5e174d7402164688eed7be): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:41:41 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:41:41 ha-422561 kubelet[1961]: E1003 18:41:41.921307    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-422561" podUID="2640157afe5e174d7402164688eed7be"
	Oct 03 18:41:42 ha-422561 kubelet[1961]: E1003 18:41:42.895685    1961 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:41:42 ha-422561 kubelet[1961]: E1003 18:41:42.924500    1961 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:41:42 ha-422561 kubelet[1961]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:41:42 ha-422561 kubelet[1961]:  > podSandboxID="2bca45b92f4f55f540f80dd9d8d3d282362f7f0ecce2ac4786e27a3b4a9cfd4d"
	Oct 03 18:41:42 ha-422561 kubelet[1961]: E1003 18:41:42.924594    1961 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:41:42 ha-422561 kubelet[1961]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-422561_kube-system(e643a03771f1e72f527532eff2c66a9c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:41:42 ha-422561 kubelet[1961]:  > logger="UnhandledError"
	Oct 03 18:41:42 ha-422561 kubelet[1961]: E1003 18:41:42.924624    1961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-422561" podUID="e643a03771f1e72f527532eff2c66a9c"
	Oct 03 18:41:43 ha-422561 kubelet[1961]: E1003 18:41:43.354518    1961 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-422561.186b0ef272ca351c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-422561,UID:ha-422561,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-422561 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-422561,},FirstTimestamp:2025-10-03 18:35:13.889039644 +0000 UTC m=+0.583846472,LastTimestamp:2025-10-03 18:35:13.889039644 +0000 UTC m=+0.583846472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-422561,}"
	Oct 03 18:41:43 ha-422561 kubelet[1961]: E1003 18:41:43.919754    1961 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-422561\" not found"
	Oct 03 18:41:44 ha-422561 kubelet[1961]: E1003 18:41:44.543449    1961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-422561?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 03 18:41:44 ha-422561 kubelet[1961]: I1003 18:41:44.710430    1961 kubelet_node_status.go:75] "Attempting to register node" node="ha-422561"
	Oct 03 18:41:44 ha-422561 kubelet[1961]: E1003 18:41:44.710811    1961 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-422561"
	

                                                
                                                
-- /stdout --
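The CRI-O and kubelet sections in the log above show every control-plane container (etcd, kube-scheduler, kube-controller-manager) failing at creation with the same runtime error, "cannot open sd-bus: No such file or directory", which matches the kubeadm control-plane health-check timeouts reported earlier. A minimal triage sketch, assuming shell access to the ha-422561 node and using only the crictl and journalctl invocations the log itself suggests (CONTAINERID is a placeholder, not a value from this run):

	# List all Kubernetes containers known to CRI-O, including exited ones
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect the logs of a failing container (replace CONTAINERID with an ID from the listing)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# Review recent CRI-O daemon messages for the createCtr / sd-bus errors quoted above
	sudo journalctl -u crio -n 400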
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561: exit status 6 (292.733716ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:41:46.621756   77747 status.go:458] kubeconfig endpoint: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-422561" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.54s)
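Beyond the apiserver being stopped, the status output above also shows a stale kubeconfig: the "ha-422561" endpoint does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig, and minikube prints the suggested fix itself. A minimal sketch of that fix, assuming the profile still exists and using the same binary and profile flag as the rest of this report:

	# Repoint the kubectl context at the cluster's current endpoint, as advised in the stdout above
	out/minikube-linux-amd64 update-context -p ha-422561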

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (369.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-422561 stop --alsologtostderr -v 5: (1.222231261s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 start --wait true --alsologtostderr -v 5
E1003 18:41:51.829828   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:46:51.831242   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 start --wait true --alsologtostderr -v 5: exit status 80 (6m7.337508152s)

                                                
                                                
-- stdout --
	* [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:41:47.965617   78109 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:41:47.965729   78109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:47.965734   78109 out.go:374] Setting ErrFile to fd 2...
	I1003 18:41:47.965738   78109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:47.965965   78109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:41:47.966407   78109 out.go:368] Setting JSON to false
	I1003 18:41:47.967236   78109 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5059,"bootTime":1759511849,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:41:47.967316   78109 start.go:140] virtualization: kvm guest
	I1003 18:41:47.969565   78109 out.go:179] * [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:41:47.970895   78109 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:41:47.970886   78109 notify.go:220] Checking for updates...
	I1003 18:41:47.973237   78109 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:41:47.974502   78109 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:41:47.976050   78109 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:41:47.980621   78109 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:41:47.982098   78109 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:41:47.983693   78109 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:47.983786   78109 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:41:48.006894   78109 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:41:48.006973   78109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:41:48.059814   78109 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:41:48.049141525 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:41:48.059970   78109 docker.go:318] overlay module found
	I1003 18:41:48.061805   78109 out.go:179] * Using the docker driver based on existing profile
	I1003 18:41:48.063100   78109 start.go:304] selected driver: docker
	I1003 18:41:48.063116   78109 start.go:924] validating driver "docker" against &{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:41:48.063193   78109 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:41:48.063271   78109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:41:48.115735   78109 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:41:48.106263176 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:41:48.116398   78109 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:41:48.116429   78109 cni.go:84] Creating CNI manager for ""
	I1003 18:41:48.116479   78109 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:41:48.116522   78109 start.go:348] cluster config:
	{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
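
The cluster config dumped above is the same structure minikube persists as the profile's config.json (the log saves it below under /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json). A minimal sketch for inspecting it on the build agent, assuming jq is installed; the JSON field names mirror the struct dump:

    jq '.Name, .Driver, .KubernetesConfig.KubernetesVersion' \
      /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json
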
	I1003 18:41:48.118414   78109 out.go:179] * Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	I1003 18:41:48.119473   78109 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:41:48.120615   78109 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:41:48.121657   78109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:41:48.121692   78109 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:41:48.121702   78109 cache.go:58] Caching tarball of preloaded images
	I1003 18:41:48.121752   78109 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:41:48.121806   78109 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:41:48.121822   78109 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:41:48.121972   78109 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:41:48.141259   78109 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:41:48.141277   78109 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:41:48.141293   78109 cache.go:232] Successfully downloaded all kic artifacts
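
Both cache hits above can be verified by hand; the tarball path and kicbase digest come straight from the log lines:

    # preload tarball found in the local cache, so no download happened
    ls -lh /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
    # kicbase image already in the local daemon; digest should be
    # sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
    docker images --digests gcr.io/k8s-minikube/kicbase-builds
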
	I1003 18:41:48.141322   78109 start.go:360] acquireMachinesLock for ha-422561: {Name:mk32fd04a5d9b5f89831583bab7d7527f4d187a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:41:48.141381   78109 start.go:364] duration metric: took 38.503µs to acquireMachinesLock for "ha-422561"
	I1003 18:41:48.141404   78109 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:41:48.141413   78109 fix.go:54] fixHost starting: 
	I1003 18:41:48.141623   78109 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:48.158697   78109 fix.go:112] recreateIfNeeded on ha-422561: state=Stopped err=<nil>
	W1003 18:41:48.158732   78109 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 18:41:48.160525   78109 out.go:252] * Restarting existing docker container for "ha-422561" ...
	I1003 18:41:48.160596   78109 cli_runner.go:164] Run: docker start ha-422561
	I1003 18:41:48.389421   78109 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:48.408957   78109 kic.go:430] container "ha-422561" state is running.
	I1003 18:41:48.409388   78109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:41:48.427176   78109 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
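
The restart sequence above can be replayed manually; these are the same inspect templates the log shows minikube running:

    docker start ha-422561
    docker container inspect ha-422561 --format '{{.State.Status}}'   # expect: running
    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-422561
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-422561   # 32788 in this run
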
	I1003 18:41:48.427382   78109 machine.go:93] provisionDockerMachine start ...
	I1003 18:41:48.427434   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:48.444729   78109 main.go:141] libmachine: Using SSH client type: native
	I1003 18:41:48.444951   78109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1003 18:41:48.444963   78109 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:41:48.445521   78109 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57550->127.0.0.1:32788: read: connection reset by peer
	I1003 18:41:51.588813   78109 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:41:51.588840   78109 ubuntu.go:182] provisioning hostname "ha-422561"
	I1003 18:41:51.588902   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:51.606073   78109 main.go:141] libmachine: Using SSH client type: native
	I1003 18:41:51.606334   78109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1003 18:41:51.606352   78109 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-422561 && echo "ha-422561" | sudo tee /etc/hostname
	I1003 18:41:51.755889   78109 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:41:51.755972   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:51.773186   78109 main.go:141] libmachine: Using SSH client type: native
	I1003 18:41:51.773469   78109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1003 18:41:51.773496   78109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422561/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:41:51.915364   78109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
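
The empty command output above means the /etc/hosts script had nothing to report; a quick way to confirm the node picked up its name, assuming the profile is still running:

    minikube ssh -p ha-422561 "hostname && grep ha-422561 /etc/hosts"
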
	I1003 18:41:51.915397   78109 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:41:51.915442   78109 ubuntu.go:190] setting up certificates
	I1003 18:41:51.915453   78109 provision.go:84] configureAuth start
	I1003 18:41:51.915501   78109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:41:51.932304   78109 provision.go:143] copyHostCerts
	I1003 18:41:51.932336   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:41:51.932369   78109 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:41:51.932384   78109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:41:51.932460   78109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:41:51.932569   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:41:51.932592   78109 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:41:51.932601   78109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:41:51.932644   78109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:41:51.932737   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:41:51.932762   78109 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:41:51.932770   78109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:41:51.932806   78109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:41:51.932897   78109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.ha-422561 san=[127.0.0.1 192.168.49.2 ha-422561 localhost minikube]
	I1003 18:41:52.334530   78109 provision.go:177] copyRemoteCerts
	I1003 18:41:52.334597   78109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:41:52.334648   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:52.352292   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:52.453048   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:41:52.453101   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:41:52.469816   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:41:52.469876   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 18:41:52.486010   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:41:52.486070   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 18:41:52.501699   78109 provision.go:87] duration metric: took 586.232853ms to configureAuth
	I1003 18:41:52.501734   78109 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:41:52.501896   78109 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:52.502010   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:52.519621   78109 main.go:141] libmachine: Using SSH client type: native
	I1003 18:41:52.519864   78109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1003 18:41:52.519881   78109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:41:52.769003   78109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:41:52.769026   78109 machine.go:96] duration metric: took 4.34163143s to provisionDockerMachine
	I1003 18:41:52.769048   78109 start.go:293] postStartSetup for "ha-422561" (driver="docker")
	I1003 18:41:52.769058   78109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:41:52.769105   78109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:41:52.769141   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:52.785506   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:52.886607   78109 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:41:52.890099   78109 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:41:52.890126   78109 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:41:52.890138   78109 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:41:52.890200   78109 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:41:52.890302   78109 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:41:52.890314   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:41:52.890418   78109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:41:52.897610   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:41:52.913799   78109 start.go:296] duration metric: took 144.73798ms for postStartSetup
	I1003 18:41:52.913880   78109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:41:52.913916   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:52.931323   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:53.028846   78109 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:41:53.033147   78109 fix.go:56] duration metric: took 4.891729968s for fixHost
	I1003 18:41:53.033174   78109 start.go:83] releasing machines lock for "ha-422561", held for 4.891773851s
	I1003 18:41:53.033222   78109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:41:53.050737   78109 ssh_runner.go:195] Run: cat /version.json
	I1003 18:41:53.050798   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:53.050812   78109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:41:53.050904   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:53.068768   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:53.069109   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:53.215897   78109 ssh_runner.go:195] Run: systemctl --version
	I1003 18:41:53.222143   78109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:41:53.254998   78109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:41:53.259516   78109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:41:53.259571   78109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:41:53.267402   78109 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 18:41:53.267422   78109 start.go:495] detecting cgroup driver to use...
	I1003 18:41:53.267447   78109 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:41:53.267478   78109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:41:53.280584   78109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:41:53.291928   78109 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:41:53.292007   78109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:41:53.305410   78109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:41:53.316686   78109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:41:53.392708   78109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:41:53.468550   78109 docker.go:234] disabling docker service ...
	I1003 18:41:53.468603   78109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:41:53.481912   78109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:41:53.493296   78109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:41:53.564617   78109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:41:53.641361   78109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:41:53.653265   78109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:41:53.666452   78109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:41:53.666512   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.674871   78109 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:41:53.674918   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.682900   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.690672   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.698507   78109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:41:53.705820   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.714091   78109 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.721884   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.729698   78109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:41:53.736355   78109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:41:53.743414   78109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:41:53.819717   78109 ssh_runner.go:195] Run: sudo systemctl restart crio
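
The sed pipeline above rewrites the CRI-O drop-in in place before the restart. A hedged spot-check of the effective settings (the key names are real CRI-O options; the exact file layout can vary by base-image build):

    minikube ssh -p ha-422561 "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
    # expected, per the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
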
	I1003 18:41:53.919600   78109 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:41:53.919651   78109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:41:53.923478   78109 start.go:563] Will wait 60s for crictl version
	I1003 18:41:53.923531   78109 ssh_runner.go:195] Run: which crictl
	I1003 18:41:53.926886   78109 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:41:53.950693   78109 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:41:53.950780   78109 ssh_runner.go:195] Run: crio --version
	I1003 18:41:53.978079   78109 ssh_runner.go:195] Run: crio --version
	I1003 18:41:54.006095   78109 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:41:54.007432   78109 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:41:54.024727   78109 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:41:54.028676   78109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:41:54.038280   78109 kubeadm.go:883] updating cluster {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:41:54.038374   78109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:41:54.038416   78109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:41:54.069216   78109 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:41:54.069235   78109 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:41:54.069278   78109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:41:54.093835   78109 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:41:54.093853   78109 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:41:54.093861   78109 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 18:41:54.093958   78109 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 18:41:54.094039   78109 ssh_runner.go:195] Run: crio config
	I1003 18:41:54.139191   78109 cni.go:84] Creating CNI manager for ""
	I1003 18:41:54.139209   78109 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:41:54.139225   78109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:41:54.139251   78109 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422561 NodeName:ha-422561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:41:54.139393   78109 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422561"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 18:41:54.139467   78109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:41:54.147298   78109 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:41:54.147347   78109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:41:54.154482   78109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1003 18:41:54.165970   78109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:41:54.177461   78109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
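
The three scp'd files land at the paths shown above; the rendered kubeadm config is staged as kubeadm.yaml.new so it can be compared with the previous one. Minikube runs this exact diff itself a little further down; it can also be reproduced by hand:

    minikube ssh -p ha-422561 "sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new"
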
	I1003 18:41:54.189120   78109 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 18:41:54.192398   78109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:41:54.201452   78109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:41:54.277696   78109 ssh_runner.go:195] Run: sudo systemctl start kubelet
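
At this point the kubelet drop-in rendered earlier has been written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and kubelet has been started. A sketch for confirming the unit and its flags took effect, assuming the node is up:

    minikube ssh -p ha-422561 "systemctl cat kubelet --no-pager && pgrep -a kubelet"
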
	I1003 18:41:54.301361   78109 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561 for IP: 192.168.49.2
	I1003 18:41:54.301380   78109 certs.go:195] generating shared ca certs ...
	I1003 18:41:54.301396   78109 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:41:54.301531   78109 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:41:54.301567   78109 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:41:54.301574   78109 certs.go:257] generating profile certs ...
	I1003 18:41:54.301678   78109 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key
	I1003 18:41:54.301704   78109 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2ce2e456
	I1003 18:41:54.301719   78109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2ce2e456 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1003 18:41:54.485656   78109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2ce2e456 ...
	I1003 18:41:54.485682   78109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2ce2e456: {Name:mkd64166271c8ed4363a27c4beb22c76efb402ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:41:54.485857   78109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2ce2e456 ...
	I1003 18:41:54.485874   78109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2ce2e456: {Name:mk21609dadb3006e0ff5fcda633cac720af9cd26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:41:54.485999   78109 certs.go:382] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2ce2e456 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt
	I1003 18:41:54.486165   78109 certs.go:386] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2ce2e456 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key
	I1003 18:41:54.486296   78109 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key
	I1003 18:41:54.486314   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:41:54.486329   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:41:54.486342   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:41:54.486355   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:41:54.486366   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:41:54.486378   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:41:54.486390   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:41:54.486400   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:41:54.486447   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:41:54.486488   78109 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:41:54.486499   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:41:54.486520   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:41:54.486541   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:41:54.486562   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:41:54.486601   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:41:54.486625   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:41:54.486639   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:41:54.486651   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:41:54.487214   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:41:54.504245   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:41:54.520954   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:41:54.537040   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:41:54.552996   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1003 18:41:54.568727   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:41:54.584994   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:41:54.600897   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:41:54.616824   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:41:54.632722   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:41:54.648244   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:41:54.663803   78109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:41:54.675418   78109 ssh_runner.go:195] Run: openssl version
	I1003 18:41:54.681349   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:41:54.689100   78109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:41:54.692442   78109 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:41:54.692485   78109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:41:54.725859   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:41:54.733505   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:41:54.741265   78109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:41:54.744606   78109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:41:54.744646   78109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:41:54.777788   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:41:54.785887   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:41:54.795297   78109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:41:54.799237   78109 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:41:54.799288   78109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:41:54.846396   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
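
The 8-hex-digit link names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes, which is how OpenSSL locates CAs in /etc/ssl/certs. The mapping can be reproduced on the node:

    minikube ssh -p ha-422561 "openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem && ls -l /etc/ssl/certs/b5213941.0"
    # the printed hash (b5213941) matches the symlink name created above
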
	I1003 18:41:54.855755   78109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:41:54.860752   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 18:41:54.896634   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 18:41:54.930605   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 18:41:54.965096   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 18:41:54.998440   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 18:41:55.031641   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
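
Each openssl run above uses -checkend 86400, which exits non-zero if the certificate would expire within the next 24 hours. To see the actual expiry instead of just an exit code, one of the checked certs can be inspected directly:

    minikube ssh -p ha-422561 "sudo openssl x509 -noout -subject -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt"
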
	I1003 18:41:55.065037   78109 kubeadm.go:400] StartCluster: {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:41:55.065123   78109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:41:55.065170   78109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:41:55.091392   78109 cri.go:89] found id: ""
	I1003 18:41:55.091469   78109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:41:55.099200   78109 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 18:41:55.099217   78109 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 18:41:55.099258   78109 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 18:41:55.106032   78109 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:41:55.106375   78109 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:41:55.106505   78109 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-8669/kubeconfig needs updating (will repair): [kubeconfig missing "ha-422561" cluster setting kubeconfig missing "ha-422561" context setting]
	I1003 18:41:55.106770   78109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:41:55.107315   78109 kapi.go:59] client config for ha-422561: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:41:55.107724   78109 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1003 18:41:55.107739   78109 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 18:41:55.107743   78109 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1003 18:41:55.107747   78109 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 18:41:55.107750   78109 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1003 18:41:55.107810   78109 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1003 18:41:55.108143   78109 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 18:41:55.114940   78109 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1003 18:41:55.114964   78109 kubeadm.go:601] duration metric: took 15.74296ms to restartPrimaryControlPlane
	I1003 18:41:55.114971   78109 kubeadm.go:402] duration metric: took 49.946332ms to StartCluster
	I1003 18:41:55.115005   78109 settings.go:142] acquiring lock: {Name:mk6bc950503a8f341b8aacc07a8bc72d5db3a25c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:41:55.115056   78109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:41:55.115531   78109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
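
After the repair, the kubeconfig should again contain a "ha-422561" cluster, context, and user; a quick sanity check against the file the log names:

    KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig kubectl config get-contexts
    # the ha-422561 entry should point at https://192.168.49.2:8443, matching the client config above
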
	I1003 18:41:55.115741   78109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:41:55.115824   78109 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 18:41:55.115919   78109 addons.go:69] Setting storage-provisioner=true in profile "ha-422561"
	I1003 18:41:55.115938   78109 addons.go:238] Setting addon storage-provisioner=true in "ha-422561"
	I1003 18:41:55.115942   78109 addons.go:69] Setting default-storageclass=true in profile "ha-422561"
	I1003 18:41:55.115958   78109 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:55.115972   78109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-422561"
	I1003 18:41:55.115989   78109 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:55.116225   78109 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:55.116452   78109 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:55.119136   78109 out.go:179] * Verifying Kubernetes components...
	I1003 18:41:55.120238   78109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:41:55.134787   78109 kapi.go:59] client config for ha-422561: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:41:55.135133   78109 addons.go:238] Setting addon default-storageclass=true in "ha-422561"
	I1003 18:41:55.135168   78109 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:55.135538   78109 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:55.137640   78109 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:41:55.138668   78109 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:41:55.138683   78109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 18:41:55.138728   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:55.162278   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:55.162572   78109 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 18:41:55.162597   78109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 18:41:55.163241   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:55.182395   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:55.225739   78109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:41:55.238233   78109 node_ready.go:35] waiting up to 6m0s for node "ha-422561" to be "Ready" ...
	I1003 18:41:55.270587   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:41:55.287076   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:41:55.326555   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.326612   78109 retry.go:31] will retry after 238.443182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:41:55.340406   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.340437   78109 retry.go:31] will retry after 153.323458ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
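Every apply above fails the same way: with client-side validation on, kubectl first downloads the OpenAPI schema from the apiserver, and nothing is listening on port 8443 yet, so the dial is refused before the manifest is even submitted. minikube's retry.go then reschedules the apply with a growing, jittered delay (in this run from a few hundred milliseconds up to tens of seconds). A minimal sketch of that retry-with-backoff shape, assuming a hypothetical retry helper and op callback (illustrative only, not minikube's actual retry.go):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs op until it succeeds or attempts run out, sleeping a jittered,
// roughly doubling delay between tries, the same shape as the
// "will retry after ..." lines in this log. Illustrative only.
func retry(attempts int, base time.Duration, op func() error) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	// An op that always fails, like the applies above while the apiserver is down.
	_ = retry(5, 200*time.Millisecond, func() error {
		return fmt.Errorf("dial tcp [::1]:8443: connect: connection refused")
	})
}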
	I1003 18:41:55.494856   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:41:55.546128   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.546164   78109 retry.go:31] will retry after 276.912874ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.565279   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:41:55.615128   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.615158   78109 retry.go:31] will retry after 342.439843ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.823993   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:41:55.875529   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.875561   78109 retry.go:31] will retry after 400.772518ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.957790   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:41:56.007576   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:56.007610   78109 retry.go:31] will retry after 687.440576ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:56.276587   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:41:56.327516   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:56.327545   78109 retry.go:31] will retry after 708.287937ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:56.696027   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:41:56.746649   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:56.746684   78109 retry.go:31] will retry after 518.211932ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:57.036088   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:41:57.086704   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:57.086738   78109 retry.go:31] will retry after 1.376791265s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:41:57.239372   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
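Interleaved with the addon retries, node_ready.go polls the node's Ready condition roughly every two seconds against the 6m0s budget set at start, and each poll fails with the same refused dial to 192.168.49.2:8443. A sketch of that readiness loop under the same assumptions (the waitNodeReady helper is hypothetical; the client-go calls are real, the wiring is illustrative):

package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady gets the node about every 2s within a 6m budget, logging and
// retrying on transient errors such as the refused dials in this log, and
// returns once the Ready condition is True. Illustrative sketch only.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	ctx, cancel := context.WithTimeout(ctx, 6*time.Minute)
	defer cancel()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		} else {
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}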
	I1003 18:41:57.265499   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:41:57.317068   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:57.317108   78109 retry.go:31] will retry after 1.177919083s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:58.464531   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:41:58.496033   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:41:58.515496   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:58.515532   78109 retry.go:31] will retry after 2.33145046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:41:58.546625   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:58.546674   78109 retry.go:31] will retry after 1.629869087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:41:59.239446   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:00.176874   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:00.227112   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:00.227140   78109 retry.go:31] will retry after 3.908061892s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:00.847842   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:00.898437   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:00.898463   78109 retry.go:31] will retry after 4.123747597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:01.739288   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:04.135743   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:04.186702   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:04.186732   78109 retry.go:31] will retry after 3.995977252s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:04.239305   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:05.022578   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:05.073779   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:05.073811   78109 retry.go:31] will retry after 4.388328001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:06.738802   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:08.183159   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:08.234120   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:08.234149   78109 retry.go:31] will retry after 3.547774861s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:08.739679   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:09.463080   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:09.513268   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:09.513304   78109 retry.go:31] will retry after 8.911463673s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:11.238822   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:11.782937   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:11.834357   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:11.834385   78109 retry.go:31] will retry after 8.693528714s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:13.239500   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:15.239549   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:17.739446   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:18.424887   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:18.475151   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:18.475186   78109 retry.go:31] will retry after 7.904227635s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:20.239011   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:20.528449   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:20.580777   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:20.580809   78109 retry.go:31] will retry after 20.11601788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:22.738834   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:24.739199   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:26.379921   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:26.431319   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:26.431348   78109 retry.go:31] will retry after 20.573768491s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:27.239280   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:29.738800   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:31.739121   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:33.739413   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:36.238812   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:38.238926   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:40.239194   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:40.697768   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:40.749547   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:40.749578   78109 retry.go:31] will retry after 30.248373016s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:42.239773   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:44.739009   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:46.739534   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:47.005919   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:47.057465   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:47.057491   78109 retry.go:31] will retry after 12.288685106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:49.239699   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:51.739043   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:53.739508   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:56.238897   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:58.239429   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:59.346896   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:59.401998   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:59.402035   78109 retry.go:31] will retry after 35.671655983s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:43:00.239715   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:02.239754   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:04.239822   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:06.739643   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:08.739717   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:43:10.998923   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:43:11.051273   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:43:11.051306   78109 retry.go:31] will retry after 26.001187567s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:43:11.238952   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:13.239575   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:15.738938   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:17.739263   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:20.238878   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:22.239089   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:24.239684   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:26.738761   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:28.738919   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:30.739086   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:32.739359   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:34.739740   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:43:35.074123   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:43:35.124772   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:43:35.124883   78109 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1003 18:43:37.053183   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:43:37.104592   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:43:37.104700   78109 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1003 18:43:37.106533   78109 out.go:179] * Enabled addons: 
	I1003 18:43:37.107764   78109 addons.go:514] duration metric: took 1m41.991949037s for enable addons: enabled=[]
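At this point both addons have exhausted their retry budgets: the enable step gives up after 1m41s with an empty addon list, while the readiness poll below keeps failing. Every error in this section reduces to one symptom, nothing accepting TCP connections on the apiserver port, which a plain dial would confirm (hypothetical probe; the two endpoints are the ones the failing requests above dial):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The same TCP dials the failing requests above perform; on this run
	// both would print "connect: connection refused".
	for _, addr := range []string{"192.168.49.2:8443", "[::1]:8443"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Println(addr, "->", err)
			continue
		}
		conn.Close()
		fmt.Println(addr, "-> reachable")
	}
}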
	W1003 18:43:37.239332   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:39.738898   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:42.238941   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:44.239082   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:46.239268   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:48.239582   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	[... 113 further identical node_ready retry lines, logged every 2.0–2.5 s from 18:43:50 through 18:47:51, elided; every attempt failed with the same "dial tcp 192.168.49.2:8443: connect: connection refused" ...]
	W1003 18:47:53.739191   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:47:55.238470   78109 node_ready.go:38] duration metric: took 6m0.000189393s for node "ha-422561" to be "Ready" ...
	I1003 18:47:55.241057   78109 out.go:203] 
	W1003 18:47:55.242227   78109 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1003 18:47:55.242242   78109 out.go:285] * 
	W1003 18:47:55.243958   78109 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:47:55.245321   78109 out.go:203] 

** /stderr **
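The stderr dump above tells a single story: after the container restart, nothing ever listens on 192.168.49.2:8443, so every node-readiness probe fails with "connection refused" until the 6m0s wait (the StartHostTimeout visible in the profile config later in this log) expires and minikube aborts with GUEST_START, reported below as exit status 80. A minimal, self-contained Go sketch of this kind of bounded TCP readiness poll, using the address, interval, and timeout visible in the log (an illustration only, not minikube's actual node_ready.go implementation):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// waitTCPReady polls addr until a TCP connect succeeds or the overall
	// timeout elapses. It mirrors the loop visible in the log: attempt,
	// log the failure with a "will retry" note, sleep, and give up with a
	// deadline error once the budget is spent.
	func waitTCPReady(addr string, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil // the apiserver port accepted a connection
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("waiting for %s: context deadline exceeded", addr)
			}
			fmt.Fprintf(os.Stderr, "error dialing %s (will retry): %v\n", addr, err)
			time.Sleep(interval)
		}
	}

	func main() {
		// Values taken from the failing run: ~2 s between attempts, 6m total.
		if err := waitTCPReady("192.168.49.2:8443", 2*time.Second, 6*time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(80) // minikube reports guest-start failures in the 80 range
		}
	}

Seen through this lens, the retries were never going to succeed: the refusal comes from inside the guest, so only the apiserver starting within the container could have changed the outcome.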
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-amd64 -p ha-422561 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 node list --alsologtostderr -v 5
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-422561
helpers_test.go:243: (dbg) docker inspect ha-422561:

-- stdout --
	[
	    {
	        "Id": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	        "Created": "2025-10-03T18:31:00.396132938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 78305,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T18:41:48.184631345Z",
	            "FinishedAt": "2025-10-03T18:41:47.03312274Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hostname",
	        "HostsPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hosts",
	        "LogPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512-json.log",
	        "Name": "/ha-422561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-422561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-422561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	                "LowerDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-422561",
	                "Source": "/var/lib/docker/volumes/ha-422561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422561",
	                "name.minikube.sigs.k8s.io": "ha-422561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f25e24c6846c3066ef61f48e15ea0bd5d93f4d074a9989652f5f017953ae54f4",
	            "SandboxKey": "/var/run/docker/netns/f25e24c6846c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:25:3d:05:0c:10",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de6aa7ca29f453c0d15cb280abde7ee215f554c89e78e3db8a0f7590468114b5",
	                    "EndpointID": "ea9c702790bd5592b9af12355b48fa038276e1385318d9f8348f8ea08c72f59c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422561",
	                        "eef8fc426b2b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
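Two details in this inspect output frame the rest of the post-mortem: the container is Running with a fresh StartedAt of 18:41:48 (so the Docker-level restart succeeded), and guest port 8443/tcp is published on 127.0.0.1:32791, the host-side path to the apiserver that keeps refusing connections. A hypothetical Go helper for extracting that mapping from `docker inspect` JSON (the struct mirrors only the fields shown above; it is illustrative, not part of the test suite):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// container models just the slice of `docker inspect` output we need:
	// NetworkSettings.Ports maps "8443/tcp" to its host-side bindings.
	type container struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIP   string `json:"HostIp"`
				HostPort string `json:"HostPort"`
			} `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "ha-422561").Output()
		if err != nil {
			log.Fatal(err)
		}
		var cs []container // docker inspect always returns a JSON array
		if err := json.Unmarshal(out, &cs); err != nil {
			log.Fatal(err)
		}
		if len(cs) == 0 {
			log.Fatal("no container in inspect output")
		}
		for _, b := range cs[0].NetworkSettings.Ports["8443/tcp"] {
			// For the run above this prints 127.0.0.1:32791.
			fmt.Printf("%s:%s\n", b.HostIP, b.HostPort)
		}
	}

The same lookup can be done with a Go template directly, e.g. docker container inspect -f "{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}" ha-422561, which is the pattern this log itself uses for port 22 further down.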
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561: exit status 2 (301.842377ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
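The probe above passes --format={{.Host}}, which minikube renders through Go's text/template package, so stdout carries just the Host field ("Running") while the degraded overall state surfaces as exit status 2, which the harness explicitly tolerates ("may be ok"). A minimal sketch of that template-driven formatting, with a stand-in Status struct rather than minikube's real one:

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	// Status is a stand-in for the struct a CLI might expose to --format.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		// "{{.Host}}" is the same template string passed on the command line above.
		tmpl, err := template.New("status").Parse("{{.Host}}\n")
		if err != nil {
			log.Fatal(err)
		}
		s := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
		if err := tmpl.Execute(os.Stdout, s); err != nil {
			log.Fatal(err)
		}
		// Prints: Running
	}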
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ ha-422561 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- rollout status deployment/busybox                                                          │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node add --alsologtostderr -v 5                                                                       │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node stop m02 --alsologtostderr -v 5                                                                  │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node start m02 --alsologtostderr -v 5                                                                 │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │                     │
	│ node    │ ha-422561 node list --alsologtostderr -v 5                                                                      │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │                     │
	│ stop    │ ha-422561 stop --alsologtostderr -v 5                                                                           │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │ 03 Oct 25 18:41 UTC │
	│ start   │ ha-422561 start --wait true --alsologtostderr -v 5                                                              │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │                     │
	│ node    │ ha-422561 node list --alsologtostderr -v 5                                                                      │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:47 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:41:47
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:41:47.965617   78109 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:41:47.965729   78109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:47.965734   78109 out.go:374] Setting ErrFile to fd 2...
	I1003 18:41:47.965738   78109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:47.965965   78109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:41:47.966407   78109 out.go:368] Setting JSON to false
	I1003 18:41:47.967236   78109 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5059,"bootTime":1759511849,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:41:47.967316   78109 start.go:140] virtualization: kvm guest
	I1003 18:41:47.969565   78109 out.go:179] * [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:41:47.970895   78109 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:41:47.970886   78109 notify.go:220] Checking for updates...
	I1003 18:41:47.973237   78109 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:41:47.974502   78109 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:41:47.976050   78109 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:41:47.980621   78109 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:41:47.982098   78109 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:41:47.983693   78109 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:47.983786   78109 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:41:48.006894   78109 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:41:48.006973   78109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:41:48.059814   78109 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:41:48.049141525 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:41:48.059970   78109 docker.go:318] overlay module found
	I1003 18:41:48.061805   78109 out.go:179] * Using the docker driver based on existing profile
	I1003 18:41:48.063100   78109 start.go:304] selected driver: docker
	I1003 18:41:48.063116   78109 start.go:924] validating driver "docker" against &{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:41:48.063193   78109 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:41:48.063271   78109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:41:48.115735   78109 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:41:48.106263176 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:41:48.116398   78109 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:41:48.116429   78109 cni.go:84] Creating CNI manager for ""
	I1003 18:41:48.116479   78109 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:41:48.116522   78109 start.go:348] cluster config:
	{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:41:48.118414   78109 out.go:179] * Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	I1003 18:41:48.119473   78109 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:41:48.120615   78109 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:41:48.121657   78109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:41:48.121692   78109 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:41:48.121702   78109 cache.go:58] Caching tarball of preloaded images
	I1003 18:41:48.121752   78109 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:41:48.121806   78109 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:41:48.121822   78109 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:41:48.121972   78109 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
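	[editor's note] The "Saving config to .../config.json" lines are a plain JSON round-trip of the cluster config dumped above. A minimal Go sketch of that persistence step; the struct here is abbreviated and the field set is illustrative, not minikube's exact types:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // ClusterConfig is a deliberately tiny stand-in for the much larger
    // config struct visible in the log dump above.
    type ClusterConfig struct {
    	Name              string
    	KubernetesVersion string
    	ContainerRuntime  string
    	Memory            int
    	CPUs              int
    }

    func saveProfile(path string, cc ClusterConfig) error {
    	data, err := json.MarshalIndent(cc, "", "  ")
    	if err != nil {
    		return err
    	}
    	return os.WriteFile(path, data, 0o600)
    }

    func main() {
    	cc := ClusterConfig{"ha-422561", "v1.34.1", "crio", 3072, 2}
    	fmt.Println(saveProfile("/tmp/config.json", cc))
    }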
	I1003 18:41:48.141259   78109 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:41:48.141277   78109 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:41:48.141293   78109 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:41:48.141322   78109 start.go:360] acquireMachinesLock for ha-422561: {Name:mk32fd04a5d9b5f89831583bab7d7527f4d187a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:41:48.141381   78109 start.go:364] duration metric: took 38.503µs to acquireMachinesLock for "ha-422561"
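	[editor's note] The acquireMachinesLock entry carries its retry policy inline (Delay:500ms Timeout:10m0s). A hedged sketch of that pattern using a create-exclusive lock file; the path and helper name are hypothetical, not minikube's implementation:

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // acquireLock polls for an exclusive lock file, mirroring the
    // Delay:500ms / Timeout:10m0s parameters visible in the log line.
    func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil // release callback
    		}
    		if time.Now().After(deadline) {
    			return nil, errors.New("timed out acquiring " + path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer release()
    	fmt.Println("lock held")
    }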
	I1003 18:41:48.141404   78109 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:41:48.141413   78109 fix.go:54] fixHost starting: 
	I1003 18:41:48.141623   78109 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:48.158697   78109 fix.go:112] recreateIfNeeded on ha-422561: state=Stopped err=<nil>
	W1003 18:41:48.158732   78109 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 18:41:48.160525   78109 out.go:252] * Restarting existing docker container for "ha-422561" ...
	I1003 18:41:48.160596   78109 cli_runner.go:164] Run: docker start ha-422561
	I1003 18:41:48.389421   78109 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:48.408957   78109 kic.go:430] container "ha-422561" state is running.
	I1003 18:41:48.409388   78109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:41:48.427176   78109 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:41:48.427382   78109 machine.go:93] provisionDockerMachine start ...
	I1003 18:41:48.427434   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:48.444729   78109 main.go:141] libmachine: Using SSH client type: native
	I1003 18:41:48.444951   78109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1003 18:41:48.444963   78109 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:41:48.445521   78109 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57550->127.0.0.1:32788: read: connection reset by peer
	I1003 18:41:51.588813   78109 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
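	[editor's note] The "Error dialing TCP ... connection reset by peer" followed about three seconds later by a successful hostname run is the usual race against sshd coming up inside the just-restarted container. A minimal sketch of the retry-until-accepting step; the attempt count and backoff are illustrative, not minikube's actual policy:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForSSH retries a TCP dial against the forwarded sshd port
    // (127.0.0.1:32788 in the log) until it accepts connections.
    func waitForSSH(addr string, attempts int, backoff time.Duration) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		var c net.Conn
    		c, err = net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			c.Close()
    			return nil // port accepting; a real client would handshake next
    		}
    		time.Sleep(backoff)
    	}
    	return fmt.Errorf("ssh not ready after %d attempts: %w", attempts, err)
    }

    func main() {
    	fmt.Println(waitForSSH("127.0.0.1:32788", 10, time.Second))
    }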
	I1003 18:41:51.588840   78109 ubuntu.go:182] provisioning hostname "ha-422561"
	I1003 18:41:51.588902   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:51.606073   78109 main.go:141] libmachine: Using SSH client type: native
	I1003 18:41:51.606334   78109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1003 18:41:51.606352   78109 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-422561 && echo "ha-422561" | sudo tee /etc/hostname
	I1003 18:41:51.755889   78109 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:41:51.755972   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:51.773186   78109 main.go:141] libmachine: Using SSH client type: native
	I1003 18:41:51.773469   78109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1003 18:41:51.773496   78109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422561/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:41:51.915364   78109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:41:51.915397   78109 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:41:51.915442   78109 ubuntu.go:190] setting up certificates
	I1003 18:41:51.915453   78109 provision.go:84] configureAuth start
	I1003 18:41:51.915501   78109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:41:51.932304   78109 provision.go:143] copyHostCerts
	I1003 18:41:51.932336   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:41:51.932369   78109 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:41:51.932384   78109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:41:51.932460   78109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:41:51.932569   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:41:51.932592   78109 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:41:51.932601   78109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:41:51.932644   78109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:41:51.932737   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:41:51.932762   78109 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:41:51.932770   78109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:41:51.932806   78109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:41:51.932897   78109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.ha-422561 san=[127.0.0.1 192.168.49.2 ha-422561 localhost minikube]
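	[editor's note] The "generating server cert" line lists the SANs baked into the machine's serving certificate. A hedged Go sketch of issuing such a cert with crypto/x509; it is self-signed here for brevity, whereas minikube signs with the minikubeCA key named on the same line:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-422561"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
    		// SANs from the log line: san=[127.0.0.1 192.168.49.2 ha-422561 localhost minikube]
    		DNSNames:    []string{"ha-422561", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	fmt.Println(len(der), err)
    }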
	I1003 18:41:52.334530   78109 provision.go:177] copyRemoteCerts
	I1003 18:41:52.334597   78109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:41:52.334648   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:52.352292   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:52.453048   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:41:52.453101   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:41:52.469816   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:41:52.469876   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 18:41:52.486010   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:41:52.486070   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 18:41:52.501699   78109 provision.go:87] duration metric: took 586.232853ms to configureAuth
	I1003 18:41:52.501734   78109 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:41:52.501896   78109 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:52.502010   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:52.519621   78109 main.go:141] libmachine: Using SSH client type: native
	I1003 18:41:52.519864   78109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1003 18:41:52.519881   78109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:41:52.769003   78109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:41:52.769026   78109 machine.go:96] duration metric: took 4.34163143s to provisionDockerMachine
	I1003 18:41:52.769048   78109 start.go:293] postStartSetup for "ha-422561" (driver="docker")
	I1003 18:41:52.769058   78109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:41:52.769105   78109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:41:52.769141   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:52.785506   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:52.886607   78109 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:41:52.890099   78109 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:41:52.890126   78109 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:41:52.890138   78109 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:41:52.890200   78109 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:41:52.890302   78109 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:41:52.890314   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:41:52.890418   78109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:41:52.897610   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:41:52.913799   78109 start.go:296] duration metric: took 144.73798ms for postStartSetup
	I1003 18:41:52.913880   78109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:41:52.913916   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:52.931323   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:53.028846   78109 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:41:53.033147   78109 fix.go:56] duration metric: took 4.891729968s for fixHost
	I1003 18:41:53.033174   78109 start.go:83] releasing machines lock for "ha-422561", held for 4.891773851s
	I1003 18:41:53.033222   78109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:41:53.050737   78109 ssh_runner.go:195] Run: cat /version.json
	I1003 18:41:53.050798   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:53.050812   78109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:41:53.050904   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:53.068768   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:53.069109   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:53.215897   78109 ssh_runner.go:195] Run: systemctl --version
	I1003 18:41:53.222143   78109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:41:53.254998   78109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:41:53.259516   78109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:41:53.259571   78109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:41:53.267402   78109 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 18:41:53.267422   78109 start.go:495] detecting cgroup driver to use...
	I1003 18:41:53.267447   78109 detect.go:190] detected "systemd" cgroup driver on host os
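	[editor's note] detect.go reports the host's cgroup driver so CRI-O and the kubelet can be configured consistently (both end up set to "systemd" further down). One common heuristic, sketched under the assumption that a mounted unified cgroup v2 hierarchy implies the systemd driver; this is illustrative, not minikube's exact detection logic:

    package main

    import (
    	"fmt"
    	"os"
    )

    // cgroupDriver guesses the driver: the presence of the cgroup v2
    // unified hierarchy marker file suggests systemd-managed cgroups.
    func cgroupDriver() string {
    	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
    		return "systemd"
    	}
    	return "cgroupfs"
    }

    func main() { fmt.Println("detected cgroup driver:", cgroupDriver()) }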
	I1003 18:41:53.267478   78109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:41:53.280584   78109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:41:53.291928   78109 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:41:53.292007   78109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:41:53.305410   78109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:41:53.316686   78109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:41:53.392708   78109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:41:53.468550   78109 docker.go:234] disabling docker service ...
	I1003 18:41:53.468603   78109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:41:53.481912   78109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:41:53.493296   78109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:41:53.564617   78109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:41:53.641361   78109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:41:53.653265   78109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:41:53.666452   78109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:41:53.666512   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.674871   78109 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:41:53.674918   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.682900   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.690672   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.698507   78109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:41:53.705820   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.714091   78109 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.721884   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.729698   78109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:41:53.736355   78109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:41:53.743414   78109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:41:53.819717   78109 ssh_runner.go:195] Run: sudo systemctl restart crio
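	[editor's note] The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls) before restarting CRI-O. A hedged Go sketch of the same rewrite-a-key operation; the helper is hypothetical:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setKey replaces any line assigning `key` with `key = "value"`,
    // leaving the rest of the drop-in file untouched.
    func setKey(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	// the pause-image and cgroup-manager edits from the log
    	fmt.Println(setKey("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10.1"))
    	fmt.Println(setKey("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "systemd"))
    }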
	I1003 18:41:53.919600   78109 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:41:53.919651   78109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:41:53.923478   78109 start.go:563] Will wait 60s for crictl version
	I1003 18:41:53.923531   78109 ssh_runner.go:195] Run: which crictl
	I1003 18:41:53.926886   78109 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:41:53.950693   78109 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:41:53.950780   78109 ssh_runner.go:195] Run: crio --version
	I1003 18:41:53.978079   78109 ssh_runner.go:195] Run: crio --version
	I1003 18:41:54.006095   78109 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:41:54.007432   78109 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:41:54.024727   78109 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:41:54.028676   78109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:41:54.038280   78109 kubeadm.go:883] updating cluster {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:41:54.038374   78109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:41:54.038416   78109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:41:54.069216   78109 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:41:54.069235   78109 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:41:54.069278   78109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:41:54.093835   78109 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:41:54.093853   78109 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:41:54.093861   78109 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 18:41:54.093958   78109 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 18:41:54.094039   78109 ssh_runner.go:195] Run: crio config
	I1003 18:41:54.139191   78109 cni.go:84] Creating CNI manager for ""
	I1003 18:41:54.139209   78109 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:41:54.139225   78109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:41:54.139251   78109 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422561 NodeName:ha-422561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:41:54.139393   78109 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422561"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
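	[editor's note] Everything from InitConfiguration through KubeProxyConfiguration above is rendered from the parsed kubeadm options and then scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal sketch of rendering one fragment with text/template; the field names on the params struct are invented for illustration:

    package main

    import (
    	"os"
    	"text/template"
    )

    // frag mirrors the head of the generated kubeadm config above.
    const frag = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    `

    func main() {
    	params := struct {
    		AdvertiseAddress string
    		APIServerPort    int
    	}{"192.168.49.2", 8443}
    	template.Must(template.New("kubeadm").Parse(frag)).Execute(os.Stdout, params)
    }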
	I1003 18:41:54.139467   78109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:41:54.147298   78109 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:41:54.147347   78109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:41:54.154482   78109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1003 18:41:54.165970   78109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:41:54.177461   78109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1003 18:41:54.189120   78109 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 18:41:54.192398   78109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:41:54.201452   78109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:41:54.277696   78109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:41:54.301361   78109 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561 for IP: 192.168.49.2
	I1003 18:41:54.301380   78109 certs.go:195] generating shared ca certs ...
	I1003 18:41:54.301396   78109 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:41:54.301531   78109 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:41:54.301567   78109 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:41:54.301574   78109 certs.go:257] generating profile certs ...
	I1003 18:41:54.301678   78109 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key
	I1003 18:41:54.301704   78109 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2ce2e456
	I1003 18:41:54.301719   78109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2ce2e456 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1003 18:41:54.485656   78109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2ce2e456 ...
	I1003 18:41:54.485682   78109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2ce2e456: {Name:mkd64166271c8ed4363a27c4beb22c76efb402ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:41:54.485857   78109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2ce2e456 ...
	I1003 18:41:54.485874   78109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2ce2e456: {Name:mk21609dadb3006e0ff5fcda633cac720af9cd26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:41:54.485999   78109 certs.go:382] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2ce2e456 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt
	I1003 18:41:54.486165   78109 certs.go:386] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2ce2e456 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key
	I1003 18:41:54.486296   78109 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key
	I1003 18:41:54.486314   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:41:54.486329   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:41:54.486342   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:41:54.486355   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:41:54.486366   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:41:54.486378   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:41:54.486390   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:41:54.486400   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:41:54.486447   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:41:54.486488   78109 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:41:54.486499   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:41:54.486520   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:41:54.486541   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:41:54.486562   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:41:54.486601   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:41:54.486625   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:41:54.486639   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:41:54.486651   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:41:54.487214   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:41:54.504245   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:41:54.520954   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:41:54.537040   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:41:54.552996   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1003 18:41:54.568727   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:41:54.584994   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:41:54.600897   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:41:54.616824   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:41:54.632722   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:41:54.648244   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:41:54.663803   78109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:41:54.675418   78109 ssh_runner.go:195] Run: openssl version
	I1003 18:41:54.681349   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:41:54.689100   78109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:41:54.692442   78109 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:41:54.692485   78109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:41:54.725859   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:41:54.733505   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:41:54.741265   78109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:41:54.744606   78109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:41:54.744646   78109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:41:54.777788   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:41:54.785887   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:41:54.795297   78109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:41:54.799237   78109 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:41:54.799288   78109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:41:54.846396   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
	I1003 18:41:54.855755   78109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:41:54.860752   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 18:41:54.896634   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 18:41:54.930605   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 18:41:54.965096   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 18:41:54.998440   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 18:41:55.031641   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
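	[editor's note] Each openssl invocation above is `x509 -checkend 86400`: the exit status reports whether the certificate expires within the next 24 hours, which is what lets the restart path skip regenerating still-valid certs. The same check in Go, with an illustrative path:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // inside the next d (the -checkend 86400 semantics, with d = 24h).
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }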
	I1003 18:41:55.065037   78109 kubeadm.go:400] StartCluster: {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:41:55.065123   78109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:41:55.065170   78109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:41:55.091392   78109 cri.go:89] found id: ""
	I1003 18:41:55.091469   78109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:41:55.099200   78109 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 18:41:55.099217   78109 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 18:41:55.099258   78109 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 18:41:55.106032   78109 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:41:55.106375   78109 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:41:55.106505   78109 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-8669/kubeconfig needs updating (will repair): [kubeconfig missing "ha-422561" cluster setting kubeconfig missing "ha-422561" context setting]
	I1003 18:41:55.106770   78109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:41:55.107315   78109 kapi.go:59] client config for ha-422561: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
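	[editor's note] The rest.Config dump above corresponds directly to a client-go configuration built from the profile's client certificate. A hedged sketch of constructing it and a typed clientset; the paths mirror the log:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg := &rest.Config{
    		Host: "https://192.168.49.2:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt",
    			KeyFile:  "/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key",
    			CAFile:   "/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt",
    		},
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	fmt.Println(clientset != nil, err)
    }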
	I1003 18:41:55.107724   78109 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1003 18:41:55.107739   78109 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 18:41:55.107743   78109 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1003 18:41:55.107747   78109 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 18:41:55.107750   78109 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1003 18:41:55.107810   78109 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1003 18:41:55.108143   78109 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 18:41:55.114940   78109 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1003 18:41:55.114964   78109 kubeadm.go:601] duration metric: took 15.74296ms to restartPrimaryControlPlane
	I1003 18:41:55.114971   78109 kubeadm.go:402] duration metric: took 49.946332ms to StartCluster
	I1003 18:41:55.115005   78109 settings.go:142] acquiring lock: {Name:mk6bc950503a8f341b8aacc07a8bc72d5db3a25c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:41:55.115056   78109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:41:55.115531   78109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:41:55.115741   78109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:41:55.115824   78109 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 18:41:55.115919   78109 addons.go:69] Setting storage-provisioner=true in profile "ha-422561"
	I1003 18:41:55.115938   78109 addons.go:238] Setting addon storage-provisioner=true in "ha-422561"
	I1003 18:41:55.115942   78109 addons.go:69] Setting default-storageclass=true in profile "ha-422561"
	I1003 18:41:55.115958   78109 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:55.115972   78109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-422561"
	I1003 18:41:55.115989   78109 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:55.116225   78109 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:55.116452   78109 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:55.119136   78109 out.go:179] * Verifying Kubernetes components...
	I1003 18:41:55.120238   78109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:41:55.134787   78109 kapi.go:59] client config for ha-422561: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:41:55.135133   78109 addons.go:238] Setting addon default-storageclass=true in "ha-422561"
	I1003 18:41:55.135168   78109 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:55.135538   78109 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:55.137640   78109 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:41:55.138668   78109 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:41:55.138683   78109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 18:41:55.138728   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:55.162278   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:55.162572   78109 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 18:41:55.162597   78109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 18:41:55.163241   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:55.182395   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:55.225739   78109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:41:55.238233   78109 node_ready.go:35] waiting up to 6m0s for node "ha-422561" to be "Ready" ...
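	[editor's note] node_ready.go's "waiting up to 6m0s for node ... to be Ready" is a poll against the node's NodeReady condition. A sketch with client-go's polling helper; the 2s interval is illustrative:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls until the named node reports NodeReady=True,
    // tolerating transient errors while the apiserver restarts.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // retry on transient apiserver errors
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() { fmt.Println(waitNodeReady != nil) }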
	I1003 18:41:55.270587   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:41:55.287076   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:41:55.326555   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.326612   78109 retry.go:31] will retry after 238.443182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:41:55.340406   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.340437   78109 retry.go:31] will retry after 153.323458ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
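	[editor's note] The retry delays that follow (238ms, 153ms, 276ms, 342ms, 400ms, ...) show the addon applies being retried with short randomized backoff while the restarted apiserver finishes coming up. A hedged sketch of that retry-with-jitter pattern; the policy is illustrative, not minikube's retry package verbatim:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs step up to attempts times, sleeping a jittered, growing
    // delay between failures.
    func retry(attempts int, base time.Duration, step func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = step(); err == nil {
    			return nil
    		}
    		// jittered backoff: base * (i+1) scaled by a random factor in [0.5, 1.5)
    		d := time.Duration(float64(base) * float64(i+1) * (0.5 + rand.Float64()))
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	err := retry(5, 200*time.Millisecond, func() error {
    		calls++
    		if calls < 3 {
    			return errors.New("connection refused") // apiserver not up yet
    		}
    		return nil
    	})
    	fmt.Println("done:", err)
    }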
	I1003 18:41:55.494856   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:41:55.546128   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.546164   78109 retry.go:31] will retry after 276.912874ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.565279   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:41:55.615128   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.615158   78109 retry.go:31] will retry after 342.439843ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.823993   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:41:55.875529   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.875561   78109 retry.go:31] will retry after 400.772518ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.957790   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:41:56.007576   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:56.007610   78109 retry.go:31] will retry after 687.440576ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:56.276587   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:41:56.327516   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:56.327545   78109 retry.go:31] will retry after 708.287937ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:56.696027   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:41:56.746649   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:56.746684   78109 retry.go:31] will retry after 518.211932ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:57.036088   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:41:57.086704   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:57.086738   78109 retry.go:31] will retry after 1.376791265s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:41:57.239372   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:41:57.265499   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:41:57.317068   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:57.317108   78109 retry.go:31] will retry after 1.177919083s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:58.464531   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:41:58.496033   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:41:58.515496   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:58.515532   78109 retry.go:31] will retry after 2.33145046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:41:58.546625   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:58.546674   78109 retry.go:31] will retry after 1.629869087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:41:59.239446   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:00.176874   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:00.227112   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:00.227140   78109 retry.go:31] will retry after 3.908061892s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:00.847842   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:00.898437   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:00.898463   78109 retry.go:31] will retry after 4.123747597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:01.739288   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:04.135743   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:04.186702   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:04.186732   78109 retry.go:31] will retry after 3.995977252s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:04.239305   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:05.022578   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:05.073779   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:05.073811   78109 retry.go:31] will retry after 4.388328001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:06.738802   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:08.183159   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:08.234120   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:08.234149   78109 retry.go:31] will retry after 3.547774861s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:08.739679   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:09.463080   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:09.513268   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:09.513304   78109 retry.go:31] will retry after 8.911463673s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:11.238822   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:11.782937   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:11.834357   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:11.834385   78109 retry.go:31] will retry after 8.693528714s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:13.239500   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:15.239549   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:17.739446   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:18.424887   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:18.475151   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:18.475186   78109 retry.go:31] will retry after 7.904227635s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:20.239011   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:20.528449   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:20.580777   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:20.580809   78109 retry.go:31] will retry after 20.11601788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:22.738834   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:24.739199   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:26.379921   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:26.431319   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:26.431348   78109 retry.go:31] will retry after 20.573768491s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:27.239280   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:29.738800   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:31.739121   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:33.739413   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:36.238812   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:38.238926   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:40.239194   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:40.697768   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:40.749547   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:40.749578   78109 retry.go:31] will retry after 30.248373016s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:42.239773   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:44.739009   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:46.739534   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:47.005919   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:47.057465   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:47.057491   78109 retry.go:31] will retry after 12.288685106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:49.239699   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:51.739043   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:53.739508   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:56.238897   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:58.239429   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:59.346896   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:59.401998   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:59.402035   78109 retry.go:31] will retry after 35.671655983s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:43:00.239715   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:02.239754   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:04.239822   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:06.739643   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:08.739717   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:43:10.998923   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:43:11.051273   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:43:11.051306   78109 retry.go:31] will retry after 26.001187567s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:43:11.238952   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:13.239575   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:15.738938   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:17.739263   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:20.238878   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:22.239089   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:24.239684   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:26.738761   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:28.738919   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:30.739086   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:32.739359   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:34.739740   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:43:35.074123   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:43:35.124772   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:43:35.124883   78109 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
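The delays printed by retry.go above grow from a few hundred milliseconds to over 35 seconds, but not monotonically (238ms, 153ms, 276ms, ... 20.5s, 35.67s), which is the shape of capped exponential backoff with jitter. A minimal sketch of such a schedule follows; the 200ms base and 30s cap are assumptions chosen to resemble the logged delays, not the values minikube's retry package actually uses.

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // backoffDelay returns the sleep before retry number attempt: the
    // base doubles each attempt up to maxDelay, and jitter then picks a
    // uniform point in the upper half of that window (so successive
    // delays can shrink slightly, as seen in the log). Base and cap are
    // assumptions.
    func backoffDelay(attempt int) time.Duration {
    	base, maxDelay := 200*time.Millisecond, 30*time.Second
    	d := base << uint(attempt)
    	if d <= 0 || d > maxDelay { // d <= 0 guards int64 overflow at large attempts
    		d = maxDelay
    	}
    	return d/2 + time.Duration(rand.Int63n(int64(d/2)))
    }

    func main() {
    	for i := 0; i < 8; i++ {
    		fmt.Println(i, backoffDelay(i))
    	}
    }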
	I1003 18:43:37.053183   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:43:37.104592   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:43:37.104700   78109 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1003 18:43:37.106533   78109 out.go:179] * Enabled addons: 
	I1003 18:43:37.107764   78109 addons.go:514] duration metric: took 1m41.991949037s for enable addons: enabled=[]
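With the addon phase abandoned (enabled=[]), the only remaining activity in this excerpt is node_ready.go polling the Node object every couple of seconds, each poll failing with the same connection refused against https://192.168.49.2:8443 until the 6m0s budget set at 18:41:55 runs out. Below is a minimal sketch of such a readiness poll, assuming client-go; the helper name and 2s interval are illustrative, not minikube's code.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the named Node until its Ready condition is
    // True or the timeout expires. Transient errors (such as the
    // connection-refused failures above) are logged and retried rather
    // than aborting the wait.
    func waitNodeReady(name string, timeout time.Duration) error {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		return err
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return err
    	}
    	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
    				return false, nil // keep polling on transient errors
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	if err := waitNodeReady("ha-422561", 6*time.Minute); err != nil {
    		fmt.Println("node never became Ready:", err)
    	}
    }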
	W1003 18:43:37.239332   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:39.738898   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:42.238941   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:44.239082   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:46.239268   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:48.239582   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:50.738800   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:52.738881   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:54.739056   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:57.239071   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:59.239207   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:01.239478   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:03.738847   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:05.739101   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:07.739198   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:09.739482   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:12.238792   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:14.238963   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:16.239203   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:18.239564   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:20.738823   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:22.738917   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:24.739018   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:26.739400   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:28.739723   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:31.238840   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:33.239009   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:35.239259   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:37.239746   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:39.739042   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:41.739269   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:43.739600   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:46.238810   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	[... 86 further identical node_ready.go:55 retry lines, logged roughly every 2-2.5s from 18:44:48 through 18:47:51, elided; every request to the apiserver at 192.168.49.2:8443 was refused ...]
	W1003 18:47:53.739191   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:47:55.238470   78109 node_ready.go:38] duration metric: took 6m0.000189393s for node "ha-422561" to be "Ready" ...
	I1003 18:47:55.241057   78109 out.go:203] 
	W1003 18:47:55.242227   78109 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1003 18:47:55.242242   78109 out.go:285] * 
	W1003 18:47:55.243958   78109 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:47:55.245321   78109 out.go:203] 
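The loop above is minikube's node-readiness wait: every probe of the apiserver at 192.168.49.2:8443 was refused until the 6m budget expired. A rough by-hand equivalent of that check, assuming the profile's kubeconfig is the active context (a sketch, not part of the harness):

	kubectl get node ha-422561 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'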
	
	
	==> CRI-O <==
	Oct 03 18:47:47 ha-422561 crio[515]: time="2025-10-03T18:47:47.403076329Z" level=info msg="createCtr: removing container a42340affc3dc1d7a7857706c661f39ed18d69be895ad389bfaa31213cdb8268" id=ffe0e08a-bdbb-475b-b927-415f00674390 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:47 ha-422561 crio[515]: time="2025-10-03T18:47:47.403110206Z" level=info msg="createCtr: deleting container a42340affc3dc1d7a7857706c661f39ed18d69be895ad389bfaa31213cdb8268 from storage" id=ffe0e08a-bdbb-475b-b927-415f00674390 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:47 ha-422561 crio[515]: time="2025-10-03T18:47:47.404950235Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-422561_kube-system_e643a03771f1e72f527532eff2c66a9c_0" id=ffe0e08a-bdbb-475b-b927-415f00674390 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.379748398Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=90d0d05e-00c4-4a59-9dce-7cb1a0f28e4d name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.38052727Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=11fb8911-3d19-4cb4-a6e8-a5edee3b070d name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.381440908Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-422561/kube-scheduler" id=3ea7a0d1-930c-44dd-880d-64bc8d5be6ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.381662129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.384933881Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.385353711Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.40180458Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=3ea7a0d1-930c-44dd-880d-64bc8d5be6ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.403182084Z" level=info msg="createCtr: deleting container ID 7a611c0ab3283f6b99fed2effb7b8b0720a0984e0305374658f7f096c85882bf from idIndex" id=3ea7a0d1-930c-44dd-880d-64bc8d5be6ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.403215572Z" level=info msg="createCtr: removing container 7a611c0ab3283f6b99fed2effb7b8b0720a0984e0305374658f7f096c85882bf" id=3ea7a0d1-930c-44dd-880d-64bc8d5be6ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.403245417Z" level=info msg="createCtr: deleting container 7a611c0ab3283f6b99fed2effb7b8b0720a0984e0305374658f7f096c85882bf from storage" id=3ea7a0d1-930c-44dd-880d-64bc8d5be6ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.405319115Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-422561_kube-system_2640157afe5e174d7402164688eed7be_0" id=3ea7a0d1-930c-44dd-880d-64bc8d5be6ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.380490636Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=b2d90baf-ea14-4c65-9df7-911798bba832 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.381483401Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=5e9ae564-4413-4905-924f-6780394f1541 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.382474829Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-422561/kube-apiserver" id=f36d561c-4d04-4e3b-9334-b995b3fac21a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.382703016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.386451823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.387079268Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.401047494Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=f36d561c-4d04-4e3b-9334-b995b3fac21a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.402408145Z" level=info msg="createCtr: deleting container ID d43432541d16db3ee5a96e53eacae47265812a2f006f9589780e72f155ba6ff4 from idIndex" id=f36d561c-4d04-4e3b-9334-b995b3fac21a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.402441362Z" level=info msg="createCtr: removing container d43432541d16db3ee5a96e53eacae47265812a2f006f9589780e72f155ba6ff4" id=f36d561c-4d04-4e3b-9334-b995b3fac21a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.40247631Z" level=info msg="createCtr: deleting container d43432541d16db3ee5a96e53eacae47265812a2f006f9589780e72f155ba6ff4 from storage" id=f36d561c-4d04-4e3b-9334-b995b3fac21a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.40458633Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-422561_kube-system_6ecf19dd95945fcfeaff027fad95c1ee_0" id=f36d561c-4d04-4e3b-9334-b995b3fac21a name=/runtime.v1.RuntimeService/CreateContainer
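The recurring "Container creation error: cannot open sd-bus: No such file or directory" means the OCI runtime is being asked to place containers through systemd's D-Bus API, but no sd-bus socket is reachable inside the kicbase node, so every control-plane container create fails. A hedged mitigation sketch, assuming CRI-O is configured with cgroup_manager = "systemd" in /etc/crio/crio.conf (the key may instead live in a drop-in under /etc/crio/crio.conf.d/, which this report does not show), is to fall back to the cgroupfs manager:

	minikube ssh -p ha-422561 -- "sudo sed -i 's/^cgroup_manager = .*/cgroup_manager = \"cgroupfs\"/' /etc/crio/crio.conf && sudo systemctl restart crio"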
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:47:56.203670    2006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:47:56.204228    2006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:47:56.205702    2006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:47:56.206137    2006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:47:56.207658    2006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
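This kubectl invocation runs inside the node against localhost:8443, so the refusal confirms nothing is bound to the apiserver port there. A quick host-side verification sketch (assumes ss is present in the kicbase image):

	minikube ssh -p ha-422561 -- "sudo ss -tlnp | grep 8443"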
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:47:56 up  1:30,  0 user,  load average: 0.10, 0.09, 0.08
	Linux ha-422561 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:47:47 ha-422561 kubelet[666]: E1003 18:47:47.405331     666 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:47:47 ha-422561 kubelet[666]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-422561_kube-system(e643a03771f1e72f527532eff2c66a9c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:47:47 ha-422561 kubelet[666]:  > logger="UnhandledError"
	Oct 03 18:47:47 ha-422561 kubelet[666]: E1003 18:47:47.405363     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-422561" podUID="e643a03771f1e72f527532eff2c66a9c"
	Oct 03 18:47:49 ha-422561 kubelet[666]: E1003 18:47:49.578264     666 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-422561.186b0f4fb15ee27f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-422561,UID:ha-422561,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-422561 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-422561,},FirstTimestamp:2025-10-03 18:41:54.370929279 +0000 UTC m=+0.067698676,LastTimestamp:2025-10-03 18:41:54.370929279 +0000 UTC m=+0.067698676,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-422561,}"
	Oct 03 18:47:50 ha-422561 kubelet[666]: E1003 18:47:50.019054     666 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-422561?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 03 18:47:50 ha-422561 kubelet[666]: I1003 18:47:50.179733     666 kubelet_node_status.go:75] "Attempting to register node" node="ha-422561"
	Oct 03 18:47:50 ha-422561 kubelet[666]: E1003 18:47:50.180147     666 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-422561"
	Oct 03 18:47:54 ha-422561 kubelet[666]: E1003 18:47:54.379374     666 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:47:54 ha-422561 kubelet[666]: E1003 18:47:54.398674     666 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-422561\" not found"
	Oct 03 18:47:54 ha-422561 kubelet[666]: E1003 18:47:54.405609     666 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:47:54 ha-422561 kubelet[666]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:47:54 ha-422561 kubelet[666]:  > podSandboxID="298774dbde189264a91a70e9924dc14a9e982805072e972c661c4befd3434c47"
	Oct 03 18:47:54 ha-422561 kubelet[666]: E1003 18:47:54.405711     666 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:47:54 ha-422561 kubelet[666]:         container kube-scheduler start failed in pod kube-scheduler-ha-422561_kube-system(2640157afe5e174d7402164688eed7be): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:47:54 ha-422561 kubelet[666]:  > logger="UnhandledError"
	Oct 03 18:47:54 ha-422561 kubelet[666]: E1003 18:47:54.405741     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-422561" podUID="2640157afe5e174d7402164688eed7be"
	Oct 03 18:47:55 ha-422561 kubelet[666]: E1003 18:47:55.379934     666 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:47:55 ha-422561 kubelet[666]: E1003 18:47:55.404877     666 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:47:55 ha-422561 kubelet[666]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:47:55 ha-422561 kubelet[666]:  > podSandboxID="434e99892ed1ce020750fc9407c91781adb3934c186862bfb34a22205e5e14f9"
	Oct 03 18:47:55 ha-422561 kubelet[666]: E1003 18:47:55.405020     666 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:47:55 ha-422561 kubelet[666]:         container kube-apiserver start failed in pod kube-apiserver-ha-422561_kube-system(6ecf19dd95945fcfeaff027fad95c1ee): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:47:55 ha-422561 kubelet[666]:  > logger="UnhandledError"
	Oct 03 18:47:55 ha-422561 kubelet[666]: E1003 18:47:55.405056     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-422561" podUID="6ecf19dd95945fcfeaff027fad95c1ee"
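The kubelet keeps re-syncing the control-plane static pods, but every CreateContainer call dies on the same sd-bus error, which is why the container status table above is empty even though the pod sandboxes exist. A failing sandbox can be examined directly with crictl, e.g. the kube-apiserver sandbox ID quoted in the log above (a sketch, run from the host):

	minikube ssh -p ha-422561 -- "sudo crictl ps -a"
	minikube ssh -p ha-422561 -- "sudo crictl inspectp 434e99892ed1ce020750fc9407c91781adb3934c186862bfb34a22205e5e14f9"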
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561: exit status 2 (295.626065ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-422561" apiserver is not running, skipping kubectl commands (state="Stopped")
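The harness gates its kubectl post-mortem on the same Go-template status probe used above; all component fields reported in this run can be pulled in one call (a sketch over the fields shown in the status output: Host, Kubelet, APIServer, Kubeconfig):

	out/minikube-linux-amd64 status -p ha-422561 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'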
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (369.95s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (1.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 node delete m03 --alsologtostderr -v 5: exit status 103 (254.223854ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-422561 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-422561"

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:47:56.642027   82184 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:47:56.642290   82184 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:47:56.642301   82184 out.go:374] Setting ErrFile to fd 2...
	I1003 18:47:56.642307   82184 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:47:56.642493   82184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:47:56.642781   82184 mustload.go:65] Loading cluster: ha-422561
	I1003 18:47:56.643111   82184 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:47:56.643496   82184 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:47:56.661252   82184 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:47:56.661478   82184 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:47:56.714527   82184 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:47:56.704357908 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:47:56.714645   82184 api_server.go:166] Checking apiserver status ...
	I1003 18:47:56.714707   82184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:47:56.714756   82184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:47:56.731267   82184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	W1003 18:47:56.833494   82184 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:47:56.835477   82184 out.go:179] * The control-plane node ha-422561 apiserver is not running: (state=Stopped)
	I1003 18:47:56.836798   82184 out.go:179]   To start a cluster, run: "minikube start -p ha-422561"

                                                
                                                
** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-linux-amd64 -p ha-422561 node delete m03 --alsologtostderr -v 5": exit status 103
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5: exit status 2 (288.409398ms)

                                                
                                                
-- stdout --
	ha-422561
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:47:56.893177   82279 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:47:56.893435   82279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:47:56.893445   82279 out.go:374] Setting ErrFile to fd 2...
	I1003 18:47:56.893451   82279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:47:56.893643   82279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:47:56.893851   82279 out.go:368] Setting JSON to false
	I1003 18:47:56.893890   82279 mustload.go:65] Loading cluster: ha-422561
	I1003 18:47:56.893982   82279 notify.go:220] Checking for updates...
	I1003 18:47:56.894343   82279 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:47:56.894361   82279 status.go:174] checking status of ha-422561 ...
	I1003 18:47:56.894854   82279 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:47:56.912225   82279 status.go:371] ha-422561 host status = "Running" (err=<nil>)
	I1003 18:47:56.912268   82279 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:47:56.912555   82279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:47:56.929146   82279 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:47:56.929428   82279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:47:56.929485   82279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:47:56.947117   82279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:47:57.043937   82279 ssh_runner.go:195] Run: systemctl --version
	I1003 18:47:57.050205   82279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:47:57.061845   82279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:47:57.115885   82279 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:47:57.106030815 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:47:57.116416   82279 kubeconfig.go:125] found "ha-422561" server: "https://192.168.49.2:8443"
	I1003 18:47:57.116443   82279 api_server.go:166] Checking apiserver status ...
	I1003 18:47:57.116474   82279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 18:47:57.125971   82279 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:47:57.126005   82279 status.go:463] ha-422561 apiserver status = Running (err=<nil>)
	I1003 18:47:57.126017   82279 status.go:176] ha-422561 status: &{Name:ha-422561 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
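Both the node-delete and the status paths decide apiserver liveness by grepping for the process, as logged above; the same probe by hand (a sketch, with the pattern copied from the log):

	minikube ssh -p ha-422561 -- "sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no apiserver process'"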
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-422561
helpers_test.go:243: (dbg) docker inspect ha-422561:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	        "Created": "2025-10-03T18:31:00.396132938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 78305,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T18:41:48.184631345Z",
	            "FinishedAt": "2025-10-03T18:41:47.03312274Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hostname",
	        "HostsPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hosts",
	        "LogPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512-json.log",
	        "Name": "/ha-422561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-422561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-422561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	                "LowerDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-422561",
	                "Source": "/var/lib/docker/volumes/ha-422561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422561",
	                "name.minikube.sigs.k8s.io": "ha-422561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f25e24c6846c3066ef61f48e15ea0bd5d93f4d074a9989652f5f017953ae54f4",
	            "SandboxKey": "/var/run/docker/netns/f25e24c6846c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:25:3d:05:0c:10",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de6aa7ca29f453c0d15cb280abde7ee215f554c89e78e3db8a0f7590468114b5",
	                    "EndpointID": "ea9c702790bd5592b9af12355b48fa038276e1385318d9f8348f8ea08c72f59c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422561",
	                        "eef8fc426b2b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
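The Ports map in the inspect output above is how the test harness locates the container's forwarded endpoints; the same lookup can be done by hand with a Go template (a minimal sketch — container name taken from this profile; the host port is assigned dynamically and changes across restarts):

	docker container inspect ha-422561 \
	  --format '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}'
	# prints 32788 on this run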
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561: exit status 2 (293.758878ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                    ARGS                                     │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-422561 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml            │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- rollout status deployment/busybox                      │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.io                        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default                   │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node add --alsologtostderr -v 5                                   │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node stop m02 --alsologtostderr -v 5                              │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node start m02 --alsologtostderr -v 5                             │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │                     │
	│ node    │ ha-422561 node list --alsologtostderr -v 5                                  │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │                     │
	│ stop    │ ha-422561 stop --alsologtostderr -v 5                                       │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │ 03 Oct 25 18:41 UTC │
	│ start   │ ha-422561 start --wait true --alsologtostderr -v 5                          │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │                     │
	│ node    │ ha-422561 node list --alsologtostderr -v 5                                  │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:47 UTC │                     │
	│ node    │ ha-422561 node delete m03 --alsologtostderr -v 5                            │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:47 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
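	Every kubectl row in the audit table is minikube's kubectl passthrough, so any of them can be replayed against this profile with the binary from this run (a sketch):

	out/minikube-linux-amd64 -p ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'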
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:41:47
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:41:47.965617   78109 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:41:47.965729   78109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:47.965734   78109 out.go:374] Setting ErrFile to fd 2...
	I1003 18:41:47.965738   78109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:47.965965   78109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:41:47.966407   78109 out.go:368] Setting JSON to false
	I1003 18:41:47.967236   78109 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5059,"bootTime":1759511849,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:41:47.967316   78109 start.go:140] virtualization: kvm guest
	I1003 18:41:47.969565   78109 out.go:179] * [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:41:47.970895   78109 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:41:47.970886   78109 notify.go:220] Checking for updates...
	I1003 18:41:47.973237   78109 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:41:47.974502   78109 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:41:47.976050   78109 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:41:47.980621   78109 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:41:47.982098   78109 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:41:47.983693   78109 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:47.983786   78109 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:41:48.006894   78109 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:41:48.006973   78109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:41:48.059814   78109 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:41:48.049141525 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:41:48.059970   78109 docker.go:318] overlay module found
	I1003 18:41:48.061805   78109 out.go:179] * Using the docker driver based on existing profile
	I1003 18:41:48.063100   78109 start.go:304] selected driver: docker
	I1003 18:41:48.063116   78109 start.go:924] validating driver "docker" against &{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:41:48.063193   78109 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:41:48.063271   78109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:41:48.115735   78109 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:41:48.106263176 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:41:48.116398   78109 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:41:48.116429   78109 cni.go:84] Creating CNI manager for ""
	I1003 18:41:48.116479   78109 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:41:48.116522   78109 start.go:348] cluster config:
	{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:41:48.118414   78109 out.go:179] * Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	I1003 18:41:48.119473   78109 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:41:48.120615   78109 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:41:48.121657   78109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:41:48.121692   78109 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:41:48.121702   78109 cache.go:58] Caching tarball of preloaded images
	I1003 18:41:48.121752   78109 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:41:48.121806   78109 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:41:48.121822   78109 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:41:48.121972   78109 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:41:48.141259   78109 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:41:48.141277   78109 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:41:48.141293   78109 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:41:48.141322   78109 start.go:360] acquireMachinesLock for ha-422561: {Name:mk32fd04a5d9b5f89831583bab7d7527f4d187a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:41:48.141381   78109 start.go:364] duration metric: took 38.503µs to acquireMachinesLock for "ha-422561"
	I1003 18:41:48.141404   78109 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:41:48.141413   78109 fix.go:54] fixHost starting: 
	I1003 18:41:48.141623   78109 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:48.158697   78109 fix.go:112] recreateIfNeeded on ha-422561: state=Stopped err=<nil>
	W1003 18:41:48.158732   78109 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 18:41:48.160525   78109 out.go:252] * Restarting existing docker container for "ha-422561" ...
	I1003 18:41:48.160596   78109 cli_runner.go:164] Run: docker start ha-422561
	I1003 18:41:48.389421   78109 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:48.408957   78109 kic.go:430] container "ha-422561" state is running.
	I1003 18:41:48.409388   78109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:41:48.427176   78109 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:41:48.427382   78109 machine.go:93] provisionDockerMachine start ...
	I1003 18:41:48.427434   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:48.444729   78109 main.go:141] libmachine: Using SSH client type: native
	I1003 18:41:48.444951   78109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1003 18:41:48.444963   78109 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:41:48.445521   78109 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57550->127.0.0.1:32788: read: connection reset by peer
	I1003 18:41:51.588813   78109 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:41:51.588840   78109 ubuntu.go:182] provisioning hostname "ha-422561"
	I1003 18:41:51.588902   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:51.606073   78109 main.go:141] libmachine: Using SSH client type: native
	I1003 18:41:51.606334   78109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1003 18:41:51.606352   78109 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-422561 && echo "ha-422561" | sudo tee /etc/hostname
	I1003 18:41:51.755889   78109 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:41:51.755972   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:51.773186   78109 main.go:141] libmachine: Using SSH client type: native
	I1003 18:41:51.773469   78109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1003 18:41:51.773496   78109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422561/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:41:51.915364   78109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:41:51.915397   78109 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:41:51.915442   78109 ubuntu.go:190] setting up certificates
	I1003 18:41:51.915453   78109 provision.go:84] configureAuth start
	I1003 18:41:51.915501   78109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:41:51.932304   78109 provision.go:143] copyHostCerts
	I1003 18:41:51.932336   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:41:51.932369   78109 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:41:51.932384   78109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:41:51.932460   78109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:41:51.932569   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:41:51.932592   78109 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:41:51.932601   78109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:41:51.932644   78109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:41:51.932737   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:41:51.932762   78109 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:41:51.932770   78109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:41:51.932806   78109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:41:51.932897   78109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.ha-422561 san=[127.0.0.1 192.168.49.2 ha-422561 localhost minikube]
	I1003 18:41:52.334530   78109 provision.go:177] copyRemoteCerts
	I1003 18:41:52.334597   78109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:41:52.334648   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:52.352292   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:52.453048   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:41:52.453101   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:41:52.469816   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:41:52.469876   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 18:41:52.486010   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:41:52.486070   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 18:41:52.501699   78109 provision.go:87] duration metric: took 586.232853ms to configureAuth
	I1003 18:41:52.501734   78109 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:41:52.501896   78109 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:52.502010   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:52.519621   78109 main.go:141] libmachine: Using SSH client type: native
	I1003 18:41:52.519864   78109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1003 18:41:52.519881   78109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:41:52.769003   78109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:41:52.769026   78109 machine.go:96] duration metric: took 4.34163143s to provisionDockerMachine
	I1003 18:41:52.769048   78109 start.go:293] postStartSetup for "ha-422561" (driver="docker")
	I1003 18:41:52.769058   78109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:41:52.769105   78109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:41:52.769141   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:52.785506   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:52.886607   78109 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:41:52.890099   78109 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:41:52.890126   78109 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:41:52.890138   78109 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:41:52.890200   78109 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:41:52.890302   78109 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:41:52.890314   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:41:52.890418   78109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:41:52.897610   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:41:52.913799   78109 start.go:296] duration metric: took 144.73798ms for postStartSetup
	I1003 18:41:52.913880   78109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:41:52.913916   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:52.931323   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:53.028846   78109 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:41:53.033147   78109 fix.go:56] duration metric: took 4.891729968s for fixHost
	I1003 18:41:53.033174   78109 start.go:83] releasing machines lock for "ha-422561", held for 4.891773851s
	I1003 18:41:53.033222   78109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:41:53.050737   78109 ssh_runner.go:195] Run: cat /version.json
	I1003 18:41:53.050798   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:53.050812   78109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:41:53.050904   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:53.068768   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:53.069109   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:53.215897   78109 ssh_runner.go:195] Run: systemctl --version
	I1003 18:41:53.222143   78109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:41:53.254998   78109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:41:53.259516   78109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:41:53.259571   78109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:41:53.267402   78109 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 18:41:53.267422   78109 start.go:495] detecting cgroup driver to use...
	I1003 18:41:53.267447   78109 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:41:53.267478   78109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:41:53.280584   78109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:41:53.291928   78109 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:41:53.292007   78109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:41:53.305410   78109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:41:53.316686   78109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:41:53.392708   78109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:41:53.468550   78109 docker.go:234] disabling docker service ...
	I1003 18:41:53.468603   78109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:41:53.481912   78109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:41:53.493296   78109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:41:53.564617   78109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:41:53.641361   78109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:41:53.653265   78109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:41:53.666452   78109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:41:53.666512   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.674871   78109 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:41:53.674918   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.682900   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.690672   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.698507   78109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:41:53.705820   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.714091   78109 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.721884   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.729698   78109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:41:53.736355   78109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:41:53.743414   78109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:41:53.819717   78109 ssh_runner.go:195] Run: sudo systemctl restart crio
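	The sed edits above patch /etc/crio/crio.conf.d/02-crio.conf in place before this restart; a quick sketch for confirming the resulting keys on the node:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' /etc/crio/crio.conf.d/02-crio.conf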
	I1003 18:41:53.919600   78109 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:41:53.919651   78109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:41:53.923478   78109 start.go:563] Will wait 60s for crictl version
	I1003 18:41:53.923531   78109 ssh_runner.go:195] Run: which crictl
	I1003 18:41:53.926886   78109 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:41:53.950693   78109 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:41:53.950780   78109 ssh_runner.go:195] Run: crio --version
	I1003 18:41:53.978079   78109 ssh_runner.go:195] Run: crio --version
	I1003 18:41:54.006095   78109 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:41:54.007432   78109 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:41:54.024727   78109 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:41:54.028676   78109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:41:54.038280   78109 kubeadm.go:883] updating cluster {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:41:54.038374   78109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:41:54.038416   78109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:41:54.069216   78109 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:41:54.069235   78109 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:41:54.069278   78109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:41:54.093835   78109 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:41:54.093853   78109 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:41:54.093861   78109 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 18:41:54.093958   78109 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
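	The unit fragment above lands in the kubelet drop-in written a few steps below; to see exactly what the node will execute, the merged unit can be dumped (a sketch):

	systemctl cat kubelet
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf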
	I1003 18:41:54.094039   78109 ssh_runner.go:195] Run: crio config
	I1003 18:41:54.139191   78109 cni.go:84] Creating CNI manager for ""
	I1003 18:41:54.139209   78109 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:41:54.139225   78109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:41:54.139251   78109 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422561 NodeName:ha-422561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:41:54.139393   78109 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422561"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
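	A generated config like the one above can be validated without touching the node; a minimal sketch, using kubeadm's built-in dry run and the path the file is copied to below:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run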
	I1003 18:41:54.139467   78109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:41:54.147298   78109 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:41:54.147347   78109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:41:54.154482   78109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1003 18:41:54.165970   78109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:41:54.177461   78109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1003 18:41:54.189120   78109 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 18:41:54.192398   78109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:41:54.201452   78109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:41:54.277696   78109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:41:54.301361   78109 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561 for IP: 192.168.49.2
	I1003 18:41:54.301380   78109 certs.go:195] generating shared ca certs ...
	I1003 18:41:54.301396   78109 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:41:54.301531   78109 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:41:54.301567   78109 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:41:54.301574   78109 certs.go:257] generating profile certs ...
	I1003 18:41:54.301678   78109 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key
	I1003 18:41:54.301704   78109 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2ce2e456
	I1003 18:41:54.301719   78109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2ce2e456 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1003 18:41:54.485656   78109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2ce2e456 ...
	I1003 18:41:54.485682   78109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2ce2e456: {Name:mkd64166271c8ed4363a27c4beb22c76efb402ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:41:54.485857   78109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2ce2e456 ...
	I1003 18:41:54.485874   78109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2ce2e456: {Name:mk21609dadb3006e0ff5fcda633cac720af9cd26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:41:54.485999   78109 certs.go:382] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2ce2e456 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt
	I1003 18:41:54.486165   78109 certs.go:386] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2ce2e456 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key
	I1003 18:41:54.486296   78109 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key
	I1003 18:41:54.486314   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:41:54.486329   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:41:54.486342   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:41:54.486355   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:41:54.486366   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:41:54.486378   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:41:54.486390   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:41:54.486400   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:41:54.486447   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:41:54.486488   78109 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:41:54.486499   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:41:54.486520   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:41:54.486541   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:41:54.486562   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:41:54.486601   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:41:54.486625   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:41:54.486639   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:41:54.486651   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:41:54.487214   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:41:54.504245   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:41:54.520954   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:41:54.537040   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:41:54.552996   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1003 18:41:54.568727   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:41:54.584994   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:41:54.600897   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:41:54.616824   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:41:54.632722   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:41:54.648244   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:41:54.663803   78109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
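
The block above is one transfer loop: each vm_assets NewFileAsset line pairs a file on the CI host with its destination inside the node, and the ssh_runner scp lines then push them over SSH. A minimal Go sketch of that pattern, under stated assumptions: the fileAsset type is a hypothetical stand-in for minikube's vm_assets, plain scp on PATH stands in for minikube's own ssh_runner, and the destination string is illustrative (a node behind a mapped port would also need scp's -P flag).

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // fileAsset is a hypothetical stand-in for minikube's vm_assets pairing:
    // a path on the CI host and its destination inside the node.
    type fileAsset struct{ src, dst string }

    // push copies each asset to the node over SSH via the scp CLI.
    func push(assets []fileAsset, sshDest string) error {
    	for _, a := range assets {
    		if out, err := exec.Command("scp", a.src, sshDest+":"+a.dst).CombinedOutput(); err != nil {
    			return fmt.Errorf("scp %s -> %s: %v\n%s", a.src, a.dst, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	assets := []fileAsset{
    		{"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", "/var/lib/minikube/certs/ca.crt"},
    	}
    	if err := push(assets, "docker@127.0.0.1"); err != nil {
    		fmt.Println(err)
    	}
    }
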
	I1003 18:41:54.675418   78109 ssh_runner.go:195] Run: openssl version
	I1003 18:41:54.681349   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:41:54.689100   78109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:41:54.692442   78109 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:41:54.692485   78109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:41:54.725859   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:41:54.733505   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:41:54.741265   78109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:41:54.744606   78109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:41:54.744646   78109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:41:54.777788   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:41:54.785887   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:41:54.795297   78109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:41:54.799237   78109 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:41:54.799288   78109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:41:54.846396   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
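
The three certificates above are installed into the system trust store with the same recipe: place the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 at it (the .0 suffix is the first slot for that hash; b5213941.0 is the minikubeCA link seen above). A minimal Go sketch of one install, assuming openssl on PATH and root privileges; this mirrors the log's commands rather than reimplementing the hash:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCA hashes the PEM with openssl, then links
    // /etc/ssl/certs/<subject-hash>.0 at it so TLS clients that scan
    // that directory will trust the CA.
    func installCA(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // ln -fs semantics: replace a stale link if present
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
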
	I1003 18:41:54.855755   78109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:41:54.860752   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 18:41:54.896634   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 18:41:54.930605   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 18:41:54.965096   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 18:41:54.998440   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 18:41:55.031641   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
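
Each `-checkend 86400` run above asks whether a control-plane certificate expires within 86400 seconds (24 hours); a non-zero exit would trigger regeneration. The same check can be done in pure Go with crypto/x509, no openssl needed; a sketch (the path is copied from the log and used only as an example):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin mirrors `openssl x509 -checkend <seconds>`: it reports
    // true if the certificate's NotAfter falls inside the next d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
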
	I1003 18:41:55.065037   78109 kubeadm.go:400] StartCluster: {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:41:55.065123   78109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:41:55.065170   78109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:41:55.091392   78109 cri.go:89] found id: ""
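
The empty `found id: ""` result means crictl saw no kube-system containers yet; the next lines check for on-disk kubeadm config and, finding it, choose cluster restart over re-init. A sketch of the same crictl query, assuming sudo and crictl are available on the node:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listKubeSystemContainers runs the query from cri.go above: all
    // container IDs carrying the kube-system namespace label.
    func listKubeSystemContainers() ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := listKubeSystemContainers()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("found %d kube-system containers\n", len(ids)) // 0 in the run above
    }
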
	I1003 18:41:55.091469   78109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:41:55.099200   78109 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 18:41:55.099217   78109 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 18:41:55.099258   78109 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 18:41:55.106032   78109 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:41:55.106375   78109 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:41:55.106505   78109 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-8669/kubeconfig needs updating (will repair): [kubeconfig missing "ha-422561" cluster setting kubeconfig missing "ha-422561" context setting]
	I1003 18:41:55.106770   78109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:41:55.107315   78109 kapi.go:59] client config for ha-422561: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:41:55.107724   78109 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1003 18:41:55.107739   78109 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 18:41:55.107743   78109 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1003 18:41:55.107747   78109 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 18:41:55.107750   78109 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1003 18:41:55.107810   78109 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1003 18:41:55.108143   78109 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 18:41:55.114940   78109 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1003 18:41:55.114964   78109 kubeadm.go:601] duration metric: took 15.74296ms to restartPrimaryControlPlane
	I1003 18:41:55.114971   78109 kubeadm.go:402] duration metric: took 49.946332ms to StartCluster
	I1003 18:41:55.115005   78109 settings.go:142] acquiring lock: {Name:mk6bc950503a8f341b8aacc07a8bc72d5db3a25c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:41:55.115056   78109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:41:55.115531   78109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:41:55.115741   78109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:41:55.115824   78109 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 18:41:55.115919   78109 addons.go:69] Setting storage-provisioner=true in profile "ha-422561"
	I1003 18:41:55.115938   78109 addons.go:238] Setting addon storage-provisioner=true in "ha-422561"
	I1003 18:41:55.115942   78109 addons.go:69] Setting default-storageclass=true in profile "ha-422561"
	I1003 18:41:55.115958   78109 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:55.115972   78109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-422561"
	I1003 18:41:55.115989   78109 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:55.116225   78109 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:55.116452   78109 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:55.119136   78109 out.go:179] * Verifying Kubernetes components...
	I1003 18:41:55.120238   78109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:41:55.134787   78109 kapi.go:59] client config for ha-422561: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:41:55.135133   78109 addons.go:238] Setting addon default-storageclass=true in "ha-422561"
	I1003 18:41:55.135168   78109 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:55.135538   78109 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:55.137640   78109 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:41:55.138668   78109 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:41:55.138683   78109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 18:41:55.138728   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:55.162278   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:55.162572   78109 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 18:41:55.162597   78109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 18:41:55.163241   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:55.182395   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
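
The cli_runner/sshutil pairs above resolve how to reach the node: Docker reports which host port is bound to the container's 22/tcp (32788 in this run), and the SSH client then dials 127.0.0.1 on that port with the profile key. A sketch of the port lookup, assuming the docker CLI on PATH:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostSSHPort asks Docker which host port is mapped to the container's
    // 22/tcp, using the same Go-template query as the cli_runner lines above.
    func hostSSHPort(container string) (string, error) {
    	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := hostSSHPort("ha-422561")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("ssh docker@127.0.0.1 -p", port) // 32788 in this run
    }
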
	I1003 18:41:55.225739   78109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:41:55.238233   78109 node_ready.go:35] waiting up to 6m0s for node "ha-422561" to be "Ready" ...
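
From here node_ready.go polls the node's Ready condition for up to 6m0s; the connection-refused warnings repeated through the rest of this section are that poll failing while the API server is down. A rough client-go equivalent of the loop (kubeconfig path copied from the log; the 2s poll interval is a guess, not minikube's actual cadence):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node's Ready condition until it is True or
    // the timeout lapses, logging transient errors as the log above does.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
    		if err != nil {
    			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
    		} else {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("node %q not Ready after %v", name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21625-8669/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitNodeReady(cs, "ha-422561", 6*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
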
	I1003 18:41:55.270587   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:41:55.287076   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:41:55.326555   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.326612   78109 retry.go:31] will retry after 238.443182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:41:55.340406   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.340437   78109 retry.go:31] will retry after 153.323458ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
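
The remainder of this section is dominated by this loop: both addon manifests fail validation because kubectl cannot reach the API server on localhost:8443, and retry.go re-applies each with growing, jittered delays (238ms, 153ms, 276ms, ... up to tens of seconds). A sketch of that retry shape, with kubectl on PATH standing in for the ssh_runner invocation; the backoff constants are invented for illustration, not minikube's:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    // applyWithRetry re-runs `kubectl apply --force -f <manifest>` with
    // jittered exponential backoff until it succeeds or the budget runs out.
    func applyWithRetry(manifest string, budget time.Duration) error {
    	deadline := time.Now().Add(budget)
    	wait := 150 * time.Millisecond
    	for {
    		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
    		}
    		d := wait/2 + time.Duration(rand.Int63n(int64(wait))) // jitter around wait
    		fmt.Printf("will retry after %v\n", d)
    		time.Sleep(d)
    		if wait *= 2; wait > 30*time.Second {
    			wait = 30 * time.Second // cap the backoff
    		}
    	}
    }

    func main() {
    	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 6*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
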
	I1003 18:41:55.494856   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:41:55.546128   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.546164   78109 retry.go:31] will retry after 276.912874ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.565279   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:41:55.615128   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.615158   78109 retry.go:31] will retry after 342.439843ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.823993   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:41:55.875529   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.875561   78109 retry.go:31] will retry after 400.772518ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.957790   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:41:56.007576   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:56.007610   78109 retry.go:31] will retry after 687.440576ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:56.276587   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:41:56.327516   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:56.327545   78109 retry.go:31] will retry after 708.287937ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:56.696027   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:41:56.746649   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:56.746684   78109 retry.go:31] will retry after 518.211932ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:57.036088   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:41:57.086704   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:57.086738   78109 retry.go:31] will retry after 1.376791265s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:41:57.239372   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:41:57.265499   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:41:57.317068   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:57.317108   78109 retry.go:31] will retry after 1.177919083s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:58.464531   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:41:58.496033   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:41:58.515496   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:58.515532   78109 retry.go:31] will retry after 2.33145046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:41:58.546625   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:58.546674   78109 retry.go:31] will retry after 1.629869087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:41:59.239446   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:00.176874   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:00.227112   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:00.227140   78109 retry.go:31] will retry after 3.908061892s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:00.847842   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:00.898437   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:00.898463   78109 retry.go:31] will retry after 4.123747597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:01.739288   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:04.135743   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:04.186702   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:04.186732   78109 retry.go:31] will retry after 3.995977252s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:04.239305   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:05.022578   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:05.073779   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:05.073811   78109 retry.go:31] will retry after 4.388328001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:06.738802   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:08.183159   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:08.234120   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:08.234149   78109 retry.go:31] will retry after 3.547774861s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:08.739679   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:09.463080   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:09.513268   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:09.513304   78109 retry.go:31] will retry after 8.911463673s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:11.238822   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:11.782937   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:11.834357   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:11.834385   78109 retry.go:31] will retry after 8.693528714s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:13.239500   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:15.239549   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:17.739446   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:18.424887   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:18.475151   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:18.475186   78109 retry.go:31] will retry after 7.904227635s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:20.239011   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:20.528449   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:20.580777   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:20.580809   78109 retry.go:31] will retry after 20.11601788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:22.738834   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:24.739199   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:26.379921   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:26.431319   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:26.431348   78109 retry.go:31] will retry after 20.573768491s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:27.239280   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:29.738800   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:31.739121   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:33.739413   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:36.238812   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:38.238926   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:40.239194   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:40.697768   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:40.749547   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:40.749578   78109 retry.go:31] will retry after 30.248373016s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:42.239773   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:44.739009   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:46.739534   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:47.005919   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:47.057465   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:47.057491   78109 retry.go:31] will retry after 12.288685106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:49.239699   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:51.739043   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:53.739508   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:56.238897   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:58.239429   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:59.346896   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:59.401998   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:59.402035   78109 retry.go:31] will retry after 35.671655983s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:43:00.239715   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:02.239754   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:04.239822   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:06.739643   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:08.739717   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:43:10.998923   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:43:11.051273   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:43:11.051306   78109 retry.go:31] will retry after 26.001187567s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:43:11.238952   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:13.239575   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:15.738938   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:17.739263   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:20.238878   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:22.239089   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:24.239684   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:26.738761   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:28.738919   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:30.739086   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:32.739359   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:34.739740   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:43:35.074123   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:43:35.124772   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:43:35.124883   78109 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1003 18:43:37.053183   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:43:37.104592   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:43:37.104700   78109 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1003 18:43:37.106533   78109 out.go:179] * Enabled addons: 
	I1003 18:43:37.107764   78109 addons.go:514] duration metric: took 1m41.991949037s for enable addons: enabled=[]
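The addon callbacks gave up after 1m41s with an empty enabled list. Assuming the control plane eventually comes up, the two addons that failed here can be re-applied per profile without restarting the cluster (standard minikube subcommand; addon and profile names are the ones from this run):

    out/minikube-linux-amd64 addons enable default-storageclass -p ha-422561
    out/minikube-linux-amd64 addons enable storage-provisioner -p ha-422561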
	W1003 18:43:37.239332   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	[... the identical connection-refused retry repeats roughly every 2-2.5s from 18:43:39 through 18:47:51; ~120 near-duplicate lines elided ...]
	W1003 18:47:53.739191   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:47:55.238470   78109 node_ready.go:38] duration metric: took 6m0.000189393s for node "ha-422561" to be "Ready" ...
	I1003 18:47:55.241057   78109 out.go:203] 
	W1003 18:47:55.242227   78109 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1003 18:47:55.242242   78109 out.go:285] * 
	W1003 18:47:55.243958   78109 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:47:55.245321   78109 out.go:203] 
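Per the advice box above, the next diagnostic step would be a full log capture for the GitHub issue (same binary and profile as the rest of this run; --file and the global -p flag are standard minikube options):

    out/minikube-linux-amd64 logs --file=logs.txt -p ha-422561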
	
	
	==> CRI-O <==
	Oct 03 18:47:47 ha-422561 crio[515]: time="2025-10-03T18:47:47.403076329Z" level=info msg="createCtr: removing container a42340affc3dc1d7a7857706c661f39ed18d69be895ad389bfaa31213cdb8268" id=ffe0e08a-bdbb-475b-b927-415f00674390 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:47 ha-422561 crio[515]: time="2025-10-03T18:47:47.403110206Z" level=info msg="createCtr: deleting container a42340affc3dc1d7a7857706c661f39ed18d69be895ad389bfaa31213cdb8268 from storage" id=ffe0e08a-bdbb-475b-b927-415f00674390 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:47 ha-422561 crio[515]: time="2025-10-03T18:47:47.404950235Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-422561_kube-system_e643a03771f1e72f527532eff2c66a9c_0" id=ffe0e08a-bdbb-475b-b927-415f00674390 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.379748398Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=90d0d05e-00c4-4a59-9dce-7cb1a0f28e4d name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.38052727Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=11fb8911-3d19-4cb4-a6e8-a5edee3b070d name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.381440908Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-422561/kube-scheduler" id=3ea7a0d1-930c-44dd-880d-64bc8d5be6ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.381662129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.384933881Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.385353711Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.40180458Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=3ea7a0d1-930c-44dd-880d-64bc8d5be6ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.403182084Z" level=info msg="createCtr: deleting container ID 7a611c0ab3283f6b99fed2effb7b8b0720a0984e0305374658f7f096c85882bf from idIndex" id=3ea7a0d1-930c-44dd-880d-64bc8d5be6ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.403215572Z" level=info msg="createCtr: removing container 7a611c0ab3283f6b99fed2effb7b8b0720a0984e0305374658f7f096c85882bf" id=3ea7a0d1-930c-44dd-880d-64bc8d5be6ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.403245417Z" level=info msg="createCtr: deleting container 7a611c0ab3283f6b99fed2effb7b8b0720a0984e0305374658f7f096c85882bf from storage" id=3ea7a0d1-930c-44dd-880d-64bc8d5be6ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.405319115Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-422561_kube-system_2640157afe5e174d7402164688eed7be_0" id=3ea7a0d1-930c-44dd-880d-64bc8d5be6ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.380490636Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=b2d90baf-ea14-4c65-9df7-911798bba832 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.381483401Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=5e9ae564-4413-4905-924f-6780394f1541 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.382474829Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-422561/kube-apiserver" id=f36d561c-4d04-4e3b-9334-b995b3fac21a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.382703016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.386451823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.387079268Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.401047494Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=f36d561c-4d04-4e3b-9334-b995b3fac21a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.402408145Z" level=info msg="createCtr: deleting container ID d43432541d16db3ee5a96e53eacae47265812a2f006f9589780e72f155ba6ff4 from idIndex" id=f36d561c-4d04-4e3b-9334-b995b3fac21a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.402441362Z" level=info msg="createCtr: removing container d43432541d16db3ee5a96e53eacae47265812a2f006f9589780e72f155ba6ff4" id=f36d561c-4d04-4e3b-9334-b995b3fac21a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.40247631Z" level=info msg="createCtr: deleting container d43432541d16db3ee5a96e53eacae47265812a2f006f9589780e72f155ba6ff4 from storage" id=f36d561c-4d04-4e3b-9334-b995b3fac21a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.40458633Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-422561_kube-system_6ecf19dd95945fcfeaff027fad95c1ee_0" id=f36d561c-4d04-4e3b-9334-b995b3fac21a name=/runtime.v1.RuntimeService/CreateContainer
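The recurring "Container creation error: cannot open sd-bus" is why every kube-system container above is created and immediately rolled back: the OCI runtime is trying to reach systemd's D-Bus socket (typically because a systemd cgroup manager is configured) and that socket is unavailable inside the node. A hedged way to check the hypothesis from the host; the node runs as the Docker container ha-422561 (see the docker inspect output further down) and crio config is a standard CRI-O subcommand, but the drop-in path and the cgroupfs workaround are assumptions, not a fix this report validates:

    # Inspect which cgroup manager CRI-O is configured with.
    docker exec ha-422561 crio config | grep -E 'cgroup_manager|conmon_cgroup'
    # If it reports "systemd" while sd-bus is unreachable, switching to
    # cgroupfs via a drop-in is one commonly tried workaround:
    docker exec ha-422561 sh -c 'mkdir -p /etc/crio/crio.conf.d && cat > /etc/crio/crio.conf.d/99-cgroupfs.conf <<EOF
    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    EOF
    systemctl restart crio'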
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
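The table is empty because no container ever survived the CreateContainer path. Since kubectl is useless without the apiserver, the runtime can be queried directly; crictl shipping in minikube's kicbase image is an assumption about this image build:

    # List all CRI containers, including failed/rolled-back ones.
    docker exec ha-422561 sudo crictl ps -a
    # The sandboxes for the static pods should still be visible:
    docker exec ha-422561 sudo crictl pods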
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:47:57.976742    2182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:47:57.977326    2182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:47:57.978853    2182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:47:57.979321    2182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:47:57.980862    2182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:47:58 up  1:30,  0 user,  load average: 0.10, 0.09, 0.08
	Linux ha-422561 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:47:49 ha-422561 kubelet[666]: E1003 18:47:49.578264     666 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-422561.186b0f4fb15ee27f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-422561,UID:ha-422561,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-422561 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-422561,},FirstTimestamp:2025-10-03 18:41:54.370929279 +0000 UTC m=+0.067698676,LastTimestamp:2025-10-03 18:41:54.370929279 +0000 UTC m=+0.067698676,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-422561,}"
	Oct 03 18:47:50 ha-422561 kubelet[666]: E1003 18:47:50.019054     666 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-422561?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 03 18:47:50 ha-422561 kubelet[666]: I1003 18:47:50.179733     666 kubelet_node_status.go:75] "Attempting to register node" node="ha-422561"
	Oct 03 18:47:50 ha-422561 kubelet[666]: E1003 18:47:50.180147     666 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-422561"
	Oct 03 18:47:54 ha-422561 kubelet[666]: E1003 18:47:54.379374     666 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:47:54 ha-422561 kubelet[666]: E1003 18:47:54.398674     666 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-422561\" not found"
	Oct 03 18:47:54 ha-422561 kubelet[666]: E1003 18:47:54.405609     666 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:47:54 ha-422561 kubelet[666]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:47:54 ha-422561 kubelet[666]:  > podSandboxID="298774dbde189264a91a70e9924dc14a9e982805072e972c661c4befd3434c47"
	Oct 03 18:47:54 ha-422561 kubelet[666]: E1003 18:47:54.405711     666 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:47:54 ha-422561 kubelet[666]:         container kube-scheduler start failed in pod kube-scheduler-ha-422561_kube-system(2640157afe5e174d7402164688eed7be): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:47:54 ha-422561 kubelet[666]:  > logger="UnhandledError"
	Oct 03 18:47:54 ha-422561 kubelet[666]: E1003 18:47:54.405741     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-422561" podUID="2640157afe5e174d7402164688eed7be"
	Oct 03 18:47:55 ha-422561 kubelet[666]: E1003 18:47:55.379934     666 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:47:55 ha-422561 kubelet[666]: E1003 18:47:55.404877     666 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:47:55 ha-422561 kubelet[666]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:47:55 ha-422561 kubelet[666]:  > podSandboxID="434e99892ed1ce020750fc9407c91781adb3934c186862bfb34a22205e5e14f9"
	Oct 03 18:47:55 ha-422561 kubelet[666]: E1003 18:47:55.405020     666 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:47:55 ha-422561 kubelet[666]:         container kube-apiserver start failed in pod kube-apiserver-ha-422561_kube-system(6ecf19dd95945fcfeaff027fad95c1ee): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:47:55 ha-422561 kubelet[666]:  > logger="UnhandledError"
	Oct 03 18:47:55 ha-422561 kubelet[666]: E1003 18:47:55.405056     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-422561" podUID="6ecf19dd95945fcfeaff027fad95c1ee"
	Oct 03 18:47:57 ha-422561 kubelet[666]: E1003 18:47:57.020225     666 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-422561?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 03 18:47:57 ha-422561 kubelet[666]: E1003 18:47:57.044936     666 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-422561&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 03 18:47:57 ha-422561 kubelet[666]: I1003 18:47:57.181851     666 kubelet_node_status.go:75] "Attempting to register node" node="ha-422561"
	Oct 03 18:47:57 ha-422561 kubelet[666]: E1003 18:47:57.182288     666 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-422561"
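The kubelet loop above shows the full dependency chain: it cannot register the node or renew its lease (connection refused to 192.168.49.2:8443) because the very apiserver it is supposed to start keeps failing with the same sd-bus error. To watch the loop live, one could tail the kubelet unit directly (assumes journalctl inside the systemd-based node image and the stock unit name):

    docker exec ha-422561 journalctl -u kubelet -n 50 --no-pager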
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561: exit status 2 (291.186214ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-422561" apiserver is not running, skipping kubectl commands (state="Stopped")
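The helper only checked the APIServer field; the same Go-template mechanism can report the neighbouring components in one call, which makes the "Stopped" verdict easier to interpret at a glance (field names as exposed by minikube status):

    out/minikube-linux-amd64 status -p ha-422561 \
      --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'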
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (1.78s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-422561" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-422561\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-422561\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-422561\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
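The assertion above reads the Status field out of that "profile list --output json" payload. A minimal Go sketch of decoding the same shape ({"invalid":[...],"valid":[...]}); the struct names are assumptions, the JSON field names come from the output above:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList mirrors only the fields the check needs.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	// statusOf returns the Status of the named profile, if present.
	func statusOf(raw []byte, name string) (string, bool) {
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			return "", false
		}
		for _, p := range pl.Valid {
			if p.Name == name {
				return p.Status, true
			}
		}
		return "", false
	}

	func main() {
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-422561","Status":"Starting"}]}`)
		s, ok := statusOf(raw, "ha-422561")
		fmt.Println(s, ok) // Starting true
	}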
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-422561
helpers_test.go:243: (dbg) docker inspect ha-422561:

-- stdout --
	[
	    {
	        "Id": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	        "Created": "2025-10-03T18:31:00.396132938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 78305,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T18:41:48.184631345Z",
	            "FinishedAt": "2025-10-03T18:41:47.03312274Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hostname",
	        "HostsPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hosts",
	        "LogPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512-json.log",
	        "Name": "/ha-422561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-422561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-422561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	                "LowerDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-422561",
	                "Source": "/var/lib/docker/volumes/ha-422561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422561",
	                "name.minikube.sigs.k8s.io": "ha-422561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f25e24c6846c3066ef61f48e15ea0bd5d93f4d074a9989652f5f017953ae54f4",
	            "SandboxKey": "/var/run/docker/netns/f25e24c6846c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:25:3d:05:0c:10",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de6aa7ca29f453c0d15cb280abde7ee215f554c89e78e3db8a0f7590468114b5",
	                    "EndpointID": "ea9c702790bd5592b9af12355b48fa038276e1385318d9f8348f8ea08c72f59c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422561",
	                        "eef8fc426b2b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
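Single fields can be pulled from that inspect document without parsing the full JSON, using docker's -f Go templates; the sketch below (an assumed helper) reuses the exact template that appears later in these logs to resolve the published SSH port (32788 in the Ports block above):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort extracts one published port from docker container inspect.
	func hostPort(container, port string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		p, err := hostPort("ha-422561", "22/tcp")
		fmt.Println(p, err) // e.g. 32788 <nil>
	}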
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561: exit status 2 (295.394895ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                    ARGS                                     │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-422561 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml            │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- rollout status deployment/busybox                      │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.io                        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default                   │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node add --alsologtostderr -v 5                                   │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node stop m02 --alsologtostderr -v 5                              │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node start m02 --alsologtostderr -v 5                             │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │                     │
	│ node    │ ha-422561 node list --alsologtostderr -v 5                                  │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │                     │
	│ stop    │ ha-422561 stop --alsologtostderr -v 5                                       │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │ 03 Oct 25 18:41 UTC │
	│ start   │ ha-422561 start --wait true --alsologtostderr -v 5                          │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │                     │
	│ node    │ ha-422561 node list --alsologtostderr -v 5                                  │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:47 UTC │                     │
	│ node    │ ha-422561 node delete m03 --alsologtostderr -v 5                            │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:47 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:41:47
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:41:47.965617   78109 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:41:47.965729   78109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:47.965734   78109 out.go:374] Setting ErrFile to fd 2...
	I1003 18:41:47.965738   78109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:47.965965   78109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:41:47.966407   78109 out.go:368] Setting JSON to false
	I1003 18:41:47.967236   78109 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5059,"bootTime":1759511849,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:41:47.967316   78109 start.go:140] virtualization: kvm guest
	I1003 18:41:47.969565   78109 out.go:179] * [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:41:47.970895   78109 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:41:47.970886   78109 notify.go:220] Checking for updates...
	I1003 18:41:47.973237   78109 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:41:47.974502   78109 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:41:47.976050   78109 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:41:47.980621   78109 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:41:47.982098   78109 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:41:47.983693   78109 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:47.983786   78109 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:41:48.006894   78109 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:41:48.006973   78109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:41:48.059814   78109 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:41:48.049141525 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:41:48.059970   78109 docker.go:318] overlay module found
	I1003 18:41:48.061805   78109 out.go:179] * Using the docker driver based on existing profile
	I1003 18:41:48.063100   78109 start.go:304] selected driver: docker
	I1003 18:41:48.063116   78109 start.go:924] validating driver "docker" against &{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:41:48.063193   78109 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:41:48.063271   78109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:41:48.115735   78109 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:41:48.106263176 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:41:48.116398   78109 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:41:48.116429   78109 cni.go:84] Creating CNI manager for ""
	I1003 18:41:48.116479   78109 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:41:48.116522   78109 start.go:348] cluster config:
	{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:41:48.118414   78109 out.go:179] * Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	I1003 18:41:48.119473   78109 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:41:48.120615   78109 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:41:48.121657   78109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:41:48.121692   78109 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:41:48.121702   78109 cache.go:58] Caching tarball of preloaded images
	I1003 18:41:48.121752   78109 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:41:48.121806   78109 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:41:48.121822   78109 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:41:48.121972   78109 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:41:48.141259   78109 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:41:48.141277   78109 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:41:48.141293   78109 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:41:48.141322   78109 start.go:360] acquireMachinesLock for ha-422561: {Name:mk32fd04a5d9b5f89831583bab7d7527f4d187a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:41:48.141381   78109 start.go:364] duration metric: took 38.503µs to acquireMachinesLock for "ha-422561"
	I1003 18:41:48.141404   78109 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:41:48.141413   78109 fix.go:54] fixHost starting: 
	I1003 18:41:48.141623   78109 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:48.158697   78109 fix.go:112] recreateIfNeeded on ha-422561: state=Stopped err=<nil>
	W1003 18:41:48.158732   78109 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 18:41:48.160525   78109 out.go:252] * Restarting existing docker container for "ha-422561" ...
	I1003 18:41:48.160596   78109 cli_runner.go:164] Run: docker start ha-422561
	I1003 18:41:48.389421   78109 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:48.408957   78109 kic.go:430] container "ha-422561" state is running.
	I1003 18:41:48.409388   78109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:41:48.427176   78109 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:41:48.427382   78109 machine.go:93] provisionDockerMachine start ...
	I1003 18:41:48.427434   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:48.444729   78109 main.go:141] libmachine: Using SSH client type: native
	I1003 18:41:48.444951   78109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1003 18:41:48.444963   78109 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:41:48.445521   78109 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57550->127.0.0.1:32788: read: connection reset by peer
	I1003 18:41:51.588813   78109 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:41:51.588840   78109 ubuntu.go:182] provisioning hostname "ha-422561"
	I1003 18:41:51.588902   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:51.606073   78109 main.go:141] libmachine: Using SSH client type: native
	I1003 18:41:51.606334   78109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1003 18:41:51.606352   78109 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-422561 && echo "ha-422561" | sudo tee /etc/hostname
	I1003 18:41:51.755889   78109 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:41:51.755972   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:51.773186   78109 main.go:141] libmachine: Using SSH client type: native
	I1003 18:41:51.773469   78109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1003 18:41:51.773496   78109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422561/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:41:51.915364   78109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
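	The /etc/hosts snippet above is idempotent: it leaves the file alone when the hostname already resolves, rewrites an existing 127.0.1.1 alias line when there is one, and appends otherwise. A rough Go equivalent of the same logic (an assumed sketch; path and hostname from the log):

	package main

	import (
		"os"
		"strings"
	)

	func ensureHostname(path, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		for _, l := range lines {
			// Approximates grep -xq '.*\sha-422561' /etc/hosts above.
			if strings.HasSuffix(l, " "+host) || strings.HasSuffix(l, "\t"+host) {
				return nil // hostname already present
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + host // reuse the alias line
				return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0o644)
			}
		}
		lines = append(lines, "127.0.1.1 "+host) // no alias line: append
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0o644)
	}

	func main() { _ = ensureHostname("/etc/hosts", "ha-422561") }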
	I1003 18:41:51.915397   78109 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:41:51.915442   78109 ubuntu.go:190] setting up certificates
	I1003 18:41:51.915453   78109 provision.go:84] configureAuth start
	I1003 18:41:51.915501   78109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:41:51.932304   78109 provision.go:143] copyHostCerts
	I1003 18:41:51.932336   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:41:51.932369   78109 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:41:51.932384   78109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:41:51.932460   78109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:41:51.932569   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:41:51.932592   78109 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:41:51.932601   78109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:41:51.932644   78109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:41:51.932737   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:41:51.932762   78109 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:41:51.932770   78109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:41:51.932806   78109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:41:51.932897   78109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.ha-422561 san=[127.0.0.1 192.168.49.2 ha-422561 localhost minikube]
	I1003 18:41:52.334530   78109 provision.go:177] copyRemoteCerts
	I1003 18:41:52.334597   78109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:41:52.334648   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:52.352292   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:52.453048   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:41:52.453101   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:41:52.469816   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:41:52.469876   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 18:41:52.486010   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:41:52.486070   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 18:41:52.501699   78109 provision.go:87] duration metric: took 586.232853ms to configureAuth
	I1003 18:41:52.501734   78109 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:41:52.501896   78109 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:52.502010   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:52.519621   78109 main.go:141] libmachine: Using SSH client type: native
	I1003 18:41:52.519864   78109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1003 18:41:52.519881   78109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:41:52.769003   78109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:41:52.769026   78109 machine.go:96] duration metric: took 4.34163143s to provisionDockerMachine
	I1003 18:41:52.769048   78109 start.go:293] postStartSetup for "ha-422561" (driver="docker")
	I1003 18:41:52.769058   78109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:41:52.769105   78109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:41:52.769141   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:52.785506   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:52.886607   78109 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:41:52.890099   78109 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:41:52.890126   78109 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:41:52.890138   78109 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:41:52.890200   78109 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:41:52.890302   78109 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:41:52.890314   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:41:52.890418   78109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:41:52.897610   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:41:52.913799   78109 start.go:296] duration metric: took 144.73798ms for postStartSetup
	I1003 18:41:52.913880   78109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:41:52.913916   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:52.931323   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:53.028846   78109 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:41:53.033147   78109 fix.go:56] duration metric: took 4.891729968s for fixHost
	I1003 18:41:53.033174   78109 start.go:83] releasing machines lock for "ha-422561", held for 4.891773851s
	I1003 18:41:53.033222   78109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:41:53.050737   78109 ssh_runner.go:195] Run: cat /version.json
	I1003 18:41:53.050798   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:53.050812   78109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:41:53.050904   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:53.068768   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:53.069109   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:53.215897   78109 ssh_runner.go:195] Run: systemctl --version
	I1003 18:41:53.222143   78109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:41:53.254998   78109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:41:53.259516   78109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:41:53.259571   78109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:41:53.267402   78109 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 18:41:53.267422   78109 start.go:495] detecting cgroup driver to use...
	I1003 18:41:53.267447   78109 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:41:53.267478   78109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:41:53.280584   78109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:41:53.291928   78109 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:41:53.292007   78109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:41:53.305410   78109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:41:53.316686   78109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:41:53.392708   78109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:41:53.468550   78109 docker.go:234] disabling docker service ...
	I1003 18:41:53.468603   78109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:41:53.481912   78109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:41:53.493296   78109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:41:53.564617   78109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:41:53.641361   78109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:41:53.653265   78109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:41:53.666452   78109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:41:53.666512   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.674871   78109 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:41:53.674918   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.682900   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.690672   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.698507   78109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:41:53.705820   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.714091   78109 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.721884   78109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:41:53.729698   78109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:41:53.736355   78109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:41:53.743414   78109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:41:53.819717   78109 ssh_runner.go:195] Run: sudo systemctl restart crio
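	Each sed invocation above rewrites one key of /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. As an illustration only (an assumed helper, not minikube's code), the first edit, the pause_image line, expressed in Go with the same anchored pattern:

	package main

	import (
		"os"
		"regexp"
	)

	// setPauseImage mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	func setPauseImage(path, image string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		_ = setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1")
	}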
	I1003 18:41:53.919600   78109 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:41:53.919651   78109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:41:53.923478   78109 start.go:563] Will wait 60s for crictl version
	I1003 18:41:53.923531   78109 ssh_runner.go:195] Run: which crictl
	I1003 18:41:53.926886   78109 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:41:53.950693   78109 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:41:53.950780   78109 ssh_runner.go:195] Run: crio --version
	I1003 18:41:53.978079   78109 ssh_runner.go:195] Run: crio --version
	I1003 18:41:54.006095   78109 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:41:54.007432   78109 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:41:54.024727   78109 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:41:54.028676   78109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:41:54.038280   78109 kubeadm.go:883] updating cluster {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:41:54.038374   78109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:41:54.038416   78109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:41:54.069216   78109 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:41:54.069235   78109 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:41:54.069278   78109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:41:54.093835   78109 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:41:54.093853   78109 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:41:54.093861   78109 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 18:41:54.093958   78109 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 18:41:54.094039   78109 ssh_runner.go:195] Run: crio config
	I1003 18:41:54.139191   78109 cni.go:84] Creating CNI manager for ""
	I1003 18:41:54.139209   78109 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:41:54.139225   78109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:41:54.139251   78109 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422561 NodeName:ha-422561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:41:54.139393   78109 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422561"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 18:41:54.139467   78109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:41:54.147298   78109 binaries.go:44] Found k8s binaries, skipping transfer
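In the KubeProxyConfiguration rendered above, conntrack values of 0 (maxPerCore, tcpEstablishedTimeout, tcpCloseWaitTimeout) tell kube-proxy to leave the host's nf_conntrack sysctls untouched, as the inline comments note. A small sketch reading those fields back out, assuming gopkg.in/yaml.v3 is available on the module path:

// Sketch: parse the relevant KubeProxyConfiguration fields from the doc above.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type kubeProxyCfg struct {
	ClusterCIDR string `yaml:"clusterCIDR"`
	Conntrack   struct {
		MaxPerCore            int    `yaml:"maxPerCore"`
		TCPEstablishedTimeout string `yaml:"tcpEstablishedTimeout"`
	} `yaml:"conntrack"`
}

func main() {
	doc := `
clusterCIDR: "10.244.0.0/16"
conntrack:
  maxPerCore: 0
  tcpEstablishedTimeout: 0s
`
	var cfg kubeProxyCfg
	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
		panic(err)
	}
	// A zero value means "skip setting the corresponding sysctl".
	fmt.Println(cfg.ClusterCIDR, cfg.Conntrack.MaxPerCore, cfg.Conntrack.TCPEstablishedTimeout)
}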
	I1003 18:41:54.147347   78109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:41:54.154482   78109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1003 18:41:54.165970   78109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:41:54.177461   78109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1003 18:41:54.189120   78109 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 18:41:54.192398   78109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
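The one-liner above makes the /etc/hosts update idempotent: it filters out any existing line ending in a tab plus control-plane.minikube.internal, appends the fresh mapping, and copies the result back over /etc/hosts. The same edit in plain Go (writing to /etc/hosts.new here, since the real file needs root):

// Sketch: idempotently (re)pin control-plane.minikube.internal in a hosts file.
package main

import (
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) { // drop any stale mapping
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.49.2\t"+host) // append the current IP
	out := strings.Join(kept, "\n") + "\n"
	if err := os.WriteFile("/etc/hosts.new", []byte(out), 0644); err != nil {
		panic(err)
	}
}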
	I1003 18:41:54.201452   78109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:41:54.277696   78109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:41:54.301361   78109 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561 for IP: 192.168.49.2
	I1003 18:41:54.301380   78109 certs.go:195] generating shared ca certs ...
	I1003 18:41:54.301396   78109 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:41:54.301531   78109 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:41:54.301567   78109 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:41:54.301574   78109 certs.go:257] generating profile certs ...
	I1003 18:41:54.301678   78109 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key
	I1003 18:41:54.301704   78109 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2ce2e456
	I1003 18:41:54.301719   78109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2ce2e456 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1003 18:41:54.485656   78109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2ce2e456 ...
	I1003 18:41:54.485682   78109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2ce2e456: {Name:mkd64166271c8ed4363a27c4beb22c76efb402ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:41:54.485857   78109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2ce2e456 ...
	I1003 18:41:54.485874   78109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2ce2e456: {Name:mk21609dadb3006e0ff5fcda633cac720af9cd26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:41:54.485999   78109 certs.go:382] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt.2ce2e456 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt
	I1003 18:41:54.486165   78109 certs.go:386] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2ce2e456 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key
	I1003 18:41:54.486296   78109 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key
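The apiserver serving cert generated above carries four IP SANs: the in-cluster service VIP 10.96.0.1, loopback, 10.0.0.1, and the node IP 192.168.49.2. Below is a self-contained crypto/x509 sketch issuing a certificate with those SANs; minikube signs with its persisted minikubeCA rather than the throwaway CA created here, and the key sizes and subjects are illustrative.

// Sketch: sign a server cert with the IP SANs seen in the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (minikube would reuse its persisted minikubeCA key instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the four IP SANs from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}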
	I1003 18:41:54.486314   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:41:54.486329   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:41:54.486342   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:41:54.486355   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:41:54.486366   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:41:54.486378   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:41:54.486390   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:41:54.486400   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:41:54.486447   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:41:54.486488   78109 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:41:54.486499   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:41:54.486520   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:41:54.486541   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:41:54.486562   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:41:54.486601   78109 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:41:54.486625   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:41:54.486639   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:41:54.486651   78109 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:41:54.487214   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:41:54.504245   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:41:54.520954   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:41:54.537040   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:41:54.552996   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1003 18:41:54.568727   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:41:54.584994   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:41:54.600897   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:41:54.616824   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:41:54.632722   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:41:54.648244   78109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:41:54.663803   78109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:41:54.675418   78109 ssh_runner.go:195] Run: openssl version
	I1003 18:41:54.681349   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:41:54.689100   78109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:41:54.692442   78109 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:41:54.692485   78109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:41:54.725859   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:41:54.733505   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:41:54.741265   78109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:41:54.744606   78109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:41:54.744646   78109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:41:54.777788   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:41:54.785887   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:41:54.795297   78109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:41:54.799237   78109 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:41:54.799288   78109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:41:54.846396   78109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
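Each CA installed under /usr/share/ca-certificates is hashed with "openssl x509 -hash -noout", and the resulting subject hash (e.g. b5213941) becomes the <hash>.0 symlink name in /etc/ssl/certs that OpenSSL-based clients use for CA lookup. A sketch of that hash-and-link step:

// Sketch: compute a CA's subject hash via openssl, then create the
// /etc/ssl/certs/<hash>.0 symlink (needs root; illustrative only).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // ignore error; the link may not exist yet
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
}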
	I1003 18:41:54.855755   78109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:41:54.860752   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 18:41:54.896634   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 18:41:54.930605   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 18:41:54.965096   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 18:41:54.998440   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 18:41:55.031641   78109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
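The six "openssl x509 -checkend 86400" runs above exit nonzero if a certificate expires within the next 24 hours, which is what would trigger regeneration. The pure-Go equivalent using crypto/x509:

// Sketch: the Go equivalent of `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the cert at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}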
	I1003 18:41:55.065037   78109 kubeadm.go:400] StartCluster: {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:41:55.065123   78109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:41:55.065170   78109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:41:55.091392   78109 cri.go:89] found id: ""
	I1003 18:41:55.091469   78109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:41:55.099200   78109 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 18:41:55.099217   78109 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 18:41:55.099258   78109 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 18:41:55.106032   78109 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:41:55.106375   78109 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:41:55.106505   78109 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-8669/kubeconfig needs updating (will repair): [kubeconfig missing "ha-422561" cluster setting kubeconfig missing "ha-422561" context setting]
	I1003 18:41:55.106770   78109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:41:55.107315   78109 kapi.go:59] client config for ha-422561: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:41:55.107724   78109 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1003 18:41:55.107739   78109 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 18:41:55.107743   78109 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1003 18:41:55.107747   78109 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 18:41:55.107750   78109 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1003 18:41:55.107810   78109 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1003 18:41:55.108143   78109 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 18:41:55.114940   78109 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1003 18:41:55.114964   78109 kubeadm.go:601] duration metric: took 15.74296ms to restartPrimaryControlPlane
	I1003 18:41:55.114971   78109 kubeadm.go:402] duration metric: took 49.946332ms to StartCluster
	I1003 18:41:55.115005   78109 settings.go:142] acquiring lock: {Name:mk6bc950503a8f341b8aacc07a8bc72d5db3a25c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:41:55.115056   78109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:41:55.115531   78109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:41:55.115741   78109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:41:55.115824   78109 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 18:41:55.115919   78109 addons.go:69] Setting storage-provisioner=true in profile "ha-422561"
	I1003 18:41:55.115938   78109 addons.go:238] Setting addon storage-provisioner=true in "ha-422561"
	I1003 18:41:55.115942   78109 addons.go:69] Setting default-storageclass=true in profile "ha-422561"
	I1003 18:41:55.115958   78109 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:55.115972   78109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-422561"
	I1003 18:41:55.115989   78109 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:55.116225   78109 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:55.116452   78109 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:55.119136   78109 out.go:179] * Verifying Kubernetes components...
	I1003 18:41:55.120238   78109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:41:55.134787   78109 kapi.go:59] client config for ha-422561: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:41:55.135133   78109 addons.go:238] Setting addon default-storageclass=true in "ha-422561"
	I1003 18:41:55.135168   78109 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:41:55.135538   78109 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:41:55.137640   78109 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:41:55.138668   78109 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:41:55.138683   78109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 18:41:55.138728   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:55.162278   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:55.162572   78109 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 18:41:55.162597   78109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 18:41:55.163241   78109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:41:55.182395   78109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:41:55.225739   78109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:41:55.238233   78109 node_ready.go:35] waiting up to 6m0s for node "ha-422561" to be "Ready" ...
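The 6-minute Ready wait that starts here polls GET /api/v1/nodes/ha-422561 and, as the later node_ready warnings show, treats connection-refused as transient while the apiserver restarts. A client-go sketch of that loop (TLS credentials elided; see the rest.Config sketch above):

// Sketch: poll a node's Ready condition, tolerating transient API errors.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// waitNodeReady retries until the node reports Ready or the deadline passes;
// errors (e.g. connection refused) simply fall through to the next attempt.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	cfg := &rest.Config{Host: "https://192.168.49.2:8443"} // TLS fields elided for brevity
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "ha-422561", 6*time.Minute); err != nil {
		panic(err)
	}
}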
	I1003 18:41:55.270587   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:41:55.287076   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:41:55.326555   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.326612   78109 retry.go:31] will retry after 238.443182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:41:55.340406   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.340437   78109 retry.go:31] will retry after 153.323458ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
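Every apply failure above is followed by a "will retry after ..." line from retry.go: the kubectl apply is repeated with growing, jittered delays until the apiserver starts answering on :8443. A sketch of that loop's shape (the exact backoff policy here is an assumption, not minikube's retry.go):

// Sketch: retry a kubectl apply with capped exponential backoff plus jitter.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int) error {
	backoff := 200 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
		if err == nil {
			return nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff))) // jitter
		fmt.Printf("apply failed, will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		if backoff < 10*time.Second { // cap the growth
			backoff *= 2
		}
	}
	return err
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 8); err != nil {
		panic(err)
	}
}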
	I1003 18:41:55.494856   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:41:55.546128   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.546164   78109 retry.go:31] will retry after 276.912874ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.565279   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:41:55.615128   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.615158   78109 retry.go:31] will retry after 342.439843ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.823993   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:41:55.875529   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.875561   78109 retry.go:31] will retry after 400.772518ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:55.957790   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:41:56.007576   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:56.007610   78109 retry.go:31] will retry after 687.440576ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:56.276587   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:41:56.327516   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:56.327545   78109 retry.go:31] will retry after 708.287937ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:56.696027   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:41:56.746649   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:56.746684   78109 retry.go:31] will retry after 518.211932ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:57.036088   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:41:57.086704   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:57.086738   78109 retry.go:31] will retry after 1.376791265s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:41:57.239372   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:41:57.265499   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:41:57.317068   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:57.317108   78109 retry.go:31] will retry after 1.177919083s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:58.464531   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:41:58.496033   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:41:58.515496   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:58.515532   78109 retry.go:31] will retry after 2.33145046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:41:58.546625   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:41:58.546674   78109 retry.go:31] will retry after 1.629869087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:41:59.239446   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:00.176874   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:00.227112   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:00.227140   78109 retry.go:31] will retry after 3.908061892s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:00.847842   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:00.898437   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:00.898463   78109 retry.go:31] will retry after 4.123747597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:01.739288   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:04.135743   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:04.186702   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:04.186732   78109 retry.go:31] will retry after 3.995977252s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:04.239305   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:05.022578   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:05.073779   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:05.073811   78109 retry.go:31] will retry after 4.388328001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:06.738802   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:08.183159   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:08.234120   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:08.234149   78109 retry.go:31] will retry after 3.547774861s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:08.739679   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:09.463080   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:09.513268   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:09.513304   78109 retry.go:31] will retry after 8.911463673s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:11.238822   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:11.782937   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:11.834357   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:11.834385   78109 retry.go:31] will retry after 8.693528714s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:13.239500   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:15.239549   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:17.739446   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:18.424887   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:18.475151   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:18.475186   78109 retry.go:31] will retry after 7.904227635s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:20.239011   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:20.528449   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:20.580777   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:20.580809   78109 retry.go:31] will retry after 20.11601788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:22.738834   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:24.739199   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:26.379921   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:26.431319   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:26.431348   78109 retry.go:31] will retry after 20.573768491s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:27.239280   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:29.738800   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:31.739121   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:33.739413   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:36.238812   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:38.238926   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:40.239194   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:40.697768   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:42:40.749547   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:40.749578   78109 retry.go:31] will retry after 30.248373016s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:42.239773   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:44.739009   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:46.739534   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:47.005919   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:47.057465   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:47.057491   78109 retry.go:31] will retry after 12.288685106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:42:49.239699   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:51.739043   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:53.739508   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:56.238897   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:42:58.239429   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:42:59.346896   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:42:59.401998   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:42:59.402035   78109 retry.go:31] will retry after 35.671655983s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:43:00.239715   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:02.239754   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:04.239822   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:06.739643   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:08.739717   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:43:10.998923   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:43:11.051273   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:43:11.051306   78109 retry.go:31] will retry after 26.001187567s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:43:11.238952   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:13.239575   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:15.738938   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:17.739263   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:20.238878   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:22.239089   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:24.239684   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:26.738761   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:28.738919   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:30.739086   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:32.739359   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:34.739740   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:43:35.074123   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:43:35.124772   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:43:35.124883   78109 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1003 18:43:37.053183   78109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:43:37.104592   78109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:43:37.104700   78109 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1003 18:43:37.106533   78109 out.go:179] * Enabled addons: 
	I1003 18:43:37.107764   78109 addons.go:514] duration metric: took 1m41.991949037s for enable addons: enabled=[]
	W1003 18:43:37.239332   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:39.738898   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:42.238941   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:44.239082   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:46.239268   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:48.239582   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:50.738800   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:52.738881   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:54.739056   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:57.239071   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:43:59.239207   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:01.239478   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:03.738847   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:05.739101   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:07.739198   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:09.739482   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:12.238792   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:14.238963   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:16.239203   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:18.239564   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:20.738823   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:22.738917   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:24.739018   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:26.739400   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:28.739723   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:31.238840   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:33.239009   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:35.239259   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:37.239746   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:39.739042   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:41.739269   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:43.739600   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:46.238810   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:48.238919   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:50.239098   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:52.739028   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:54.739369   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:56.739650   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:44:59.238815   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:01.238933   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:03.239302   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:05.239684   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:07.738918   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:09.739100   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:11.739372   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:13.739747   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:16.238933   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:18.239051   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:20.239294   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:22.239715   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:24.738810   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:26.739034   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:28.739364   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:30.739770   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:33.239063   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:35.239425   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:37.239744   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:39.739151   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:41.739685   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:44.239046   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:46.239503   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:48.738957   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:50.739269   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:52.739747   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:55.239459   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:57.739152   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:45:59.739697   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:02.238935   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:04.239446   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:06.738747   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:08.738816   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:10.738937   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:12.739182   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:14.739698   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:17.238816   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:19.239006   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:21.239256   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:23.239603   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:25.739083   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:28.238903   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:30.239210   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:32.239740   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:34.738942   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:36.739260   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:38.739610   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:41.238773   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:43.239024   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:45.239316   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:47.239690   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:49.738813   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:52.238811   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:54.238890   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:56.239083   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:46:58.239334   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:00.239577   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:02.738811   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:04.739001   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:06.739758   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:09.239542   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:11.239643   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:13.738883   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:15.739070   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:17.739168   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:19.739551   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:22.238767   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:24.238867   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:26.239004   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:28.239235   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:30.239573   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:32.738879   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:34.738922   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:36.739197   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:38.739499   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:40.739749   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:43.238901   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:45.239199   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:47.239460   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:49.738763   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:51.738856   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:47:53.739191   78109 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:47:55.238470   78109 node_ready.go:38] duration metric: took 6m0.000189393s for node "ha-422561" to be "Ready" ...
	I1003 18:47:55.241057   78109 out.go:203] 
	W1003 18:47:55.242227   78109 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1003 18:47:55.242242   78109 out.go:285] * 
	W1003 18:47:55.243958   78109 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:47:55.245321   78109 out.go:203] 
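
	The retry.go:31 lines above show minikube re-running each failed `kubectl apply` with a jittered, roughly growing delay (8.9s, 8.7s, 7.9s, 20.1s, 20.6s, 30.2s, ...) until it gives up and surfaces the "Enabling '<addon>' returned an error" warnings. Every attempt fails for the same reason: nothing is listening on the apiserver port, so kubectl cannot download the OpenAPI schema to validate against; passing --validate=false would only mask that. A minimal Go sketch of the retry pattern, with illustrative names (applyWithRetry and the delay values are assumptions, not minikube's actual API):

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyWithRetry mirrors the loop implied by the retry.go lines: re-run
	// `kubectl apply --force -f manifest`, sleep a jittered, growing delay
	// between attempts, and give up after maxAttempts.
	func applyWithRetry(kubeconfig, manifest string, maxAttempts int) error {
		base := 5 * time.Second
		var err error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig,
				"kubectl", "apply", "--force", "-f", manifest)
			if err = cmd.Run(); err == nil {
				return nil
			}
			// Jitter so parallel addon appliers don't retry in lockstep.
			delay := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
			base *= 2
		}
		return fmt.Errorf("apply %s: giving up after %d attempts: %w", manifest, maxAttempts, err)
	}

	func main() {
		if err := applyWithRetry("/var/lib/minikube/kubeconfig",
			"/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
			fmt.Println(err)
		}
	}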
	
	
	==> CRI-O <==
	Oct 03 18:47:47 ha-422561 crio[515]: time="2025-10-03T18:47:47.403076329Z" level=info msg="createCtr: removing container a42340affc3dc1d7a7857706c661f39ed18d69be895ad389bfaa31213cdb8268" id=ffe0e08a-bdbb-475b-b927-415f00674390 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:47 ha-422561 crio[515]: time="2025-10-03T18:47:47.403110206Z" level=info msg="createCtr: deleting container a42340affc3dc1d7a7857706c661f39ed18d69be895ad389bfaa31213cdb8268 from storage" id=ffe0e08a-bdbb-475b-b927-415f00674390 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:47 ha-422561 crio[515]: time="2025-10-03T18:47:47.404950235Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-422561_kube-system_e643a03771f1e72f527532eff2c66a9c_0" id=ffe0e08a-bdbb-475b-b927-415f00674390 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.379748398Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=90d0d05e-00c4-4a59-9dce-7cb1a0f28e4d name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.38052727Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=11fb8911-3d19-4cb4-a6e8-a5edee3b070d name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.381440908Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-422561/kube-scheduler" id=3ea7a0d1-930c-44dd-880d-64bc8d5be6ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.381662129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.384933881Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.385353711Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.40180458Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=3ea7a0d1-930c-44dd-880d-64bc8d5be6ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.403182084Z" level=info msg="createCtr: deleting container ID 7a611c0ab3283f6b99fed2effb7b8b0720a0984e0305374658f7f096c85882bf from idIndex" id=3ea7a0d1-930c-44dd-880d-64bc8d5be6ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.403215572Z" level=info msg="createCtr: removing container 7a611c0ab3283f6b99fed2effb7b8b0720a0984e0305374658f7f096c85882bf" id=3ea7a0d1-930c-44dd-880d-64bc8d5be6ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.403245417Z" level=info msg="createCtr: deleting container 7a611c0ab3283f6b99fed2effb7b8b0720a0984e0305374658f7f096c85882bf from storage" id=3ea7a0d1-930c-44dd-880d-64bc8d5be6ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:54 ha-422561 crio[515]: time="2025-10-03T18:47:54.405319115Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-422561_kube-system_2640157afe5e174d7402164688eed7be_0" id=3ea7a0d1-930c-44dd-880d-64bc8d5be6ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.380490636Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=b2d90baf-ea14-4c65-9df7-911798bba832 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.381483401Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=5e9ae564-4413-4905-924f-6780394f1541 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.382474829Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-422561/kube-apiserver" id=f36d561c-4d04-4e3b-9334-b995b3fac21a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.382703016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.386451823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.387079268Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.401047494Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=f36d561c-4d04-4e3b-9334-b995b3fac21a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.402408145Z" level=info msg="createCtr: deleting container ID d43432541d16db3ee5a96e53eacae47265812a2f006f9589780e72f155ba6ff4 from idIndex" id=f36d561c-4d04-4e3b-9334-b995b3fac21a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.402441362Z" level=info msg="createCtr: removing container d43432541d16db3ee5a96e53eacae47265812a2f006f9589780e72f155ba6ff4" id=f36d561c-4d04-4e3b-9334-b995b3fac21a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.40247631Z" level=info msg="createCtr: deleting container d43432541d16db3ee5a96e53eacae47265812a2f006f9589780e72f155ba6ff4 from storage" id=f36d561c-4d04-4e3b-9334-b995b3fac21a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:47:55 ha-422561 crio[515]: time="2025-10-03T18:47:55.40458633Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-422561_kube-system_6ecf19dd95945fcfeaff027fad95c1ee_0" id=f36d561c-4d04-4e3b-9334-b995b3fac21a name=/runtime.v1.RuntimeService/CreateContainer
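
	The repeated "Container creation error: cannot open sd-bus: No such file or directory" is the actual root cause of this run: the container runtime, apparently configured for the systemd cgroup manager, cannot reach a systemd bus socket inside the node, so every control-plane container (kube-apiserver, kube-scheduler, kube-controller-manager) fails at creation. With no apiserver process, every kubectl call and readiness poll above gets "connection refused". A small probe for the sockets a systemd-based driver typically needs; the path list is an assumption for illustration, not taken from CRI-O's source:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// If both sockets are absent inside the node container,
		// "cannot open sd-bus" from the runtime is the expected failure.
		for _, p := range []string{
			"/run/systemd/private",        // systemd manager socket
			"/run/dbus/system_bus_socket", // system D-Bus socket
		} {
			if _, err := os.Stat(p); err != nil {
				fmt.Printf("missing: %s (%v)\n", p, err)
			} else {
				fmt.Printf("present: %s\n", p)
			}
		}
	}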
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:47:59.535276    2352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:47:59.535709    2352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:47:59.537316    2352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:47:59.537672    2352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:47:59.539223    2352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:47:59 up  1:30,  0 user,  load average: 0.10, 0.09, 0.08
	Linux ha-422561 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:47:50 ha-422561 kubelet[666]: E1003 18:47:50.019054     666 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-422561?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 03 18:47:50 ha-422561 kubelet[666]: I1003 18:47:50.179733     666 kubelet_node_status.go:75] "Attempting to register node" node="ha-422561"
	Oct 03 18:47:50 ha-422561 kubelet[666]: E1003 18:47:50.180147     666 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-422561"
	Oct 03 18:47:54 ha-422561 kubelet[666]: E1003 18:47:54.379374     666 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:47:54 ha-422561 kubelet[666]: E1003 18:47:54.398674     666 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-422561\" not found"
	Oct 03 18:47:54 ha-422561 kubelet[666]: E1003 18:47:54.405609     666 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:47:54 ha-422561 kubelet[666]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:47:54 ha-422561 kubelet[666]:  > podSandboxID="298774dbde189264a91a70e9924dc14a9e982805072e972c661c4befd3434c47"
	Oct 03 18:47:54 ha-422561 kubelet[666]: E1003 18:47:54.405711     666 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:47:54 ha-422561 kubelet[666]:         container kube-scheduler start failed in pod kube-scheduler-ha-422561_kube-system(2640157afe5e174d7402164688eed7be): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:47:54 ha-422561 kubelet[666]:  > logger="UnhandledError"
	Oct 03 18:47:54 ha-422561 kubelet[666]: E1003 18:47:54.405741     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-422561" podUID="2640157afe5e174d7402164688eed7be"
	Oct 03 18:47:55 ha-422561 kubelet[666]: E1003 18:47:55.379934     666 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:47:55 ha-422561 kubelet[666]: E1003 18:47:55.404877     666 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:47:55 ha-422561 kubelet[666]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:47:55 ha-422561 kubelet[666]:  > podSandboxID="434e99892ed1ce020750fc9407c91781adb3934c186862bfb34a22205e5e14f9"
	Oct 03 18:47:55 ha-422561 kubelet[666]: E1003 18:47:55.405020     666 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:47:55 ha-422561 kubelet[666]:         container kube-apiserver start failed in pod kube-apiserver-ha-422561_kube-system(6ecf19dd95945fcfeaff027fad95c1ee): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:47:55 ha-422561 kubelet[666]:  > logger="UnhandledError"
	Oct 03 18:47:55 ha-422561 kubelet[666]: E1003 18:47:55.405056     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-422561" podUID="6ecf19dd95945fcfeaff027fad95c1ee"
	Oct 03 18:47:57 ha-422561 kubelet[666]: E1003 18:47:57.020225     666 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-422561?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 03 18:47:57 ha-422561 kubelet[666]: E1003 18:47:57.044936     666 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-422561&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 03 18:47:57 ha-422561 kubelet[666]: I1003 18:47:57.181851     666 kubelet_node_status.go:75] "Attempting to register node" node="ha-422561"
	Oct 03 18:47:57 ha-422561 kubelet[666]: E1003 18:47:57.182288     666 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-422561"
	Oct 03 18:47:59 ha-422561 kubelet[666]: E1003 18:47:59.579729     666 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-422561.186b0f4fb15ee27f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-422561,UID:ha-422561,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-422561 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-422561,},FirstTimestamp:2025-10-03 18:41:54.370929279 +0000 UTC m=+0.067698676,LastTimestamp:2025-10-03 18:41:54.370929279 +0000 UTC m=+0.067698676,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-422561,}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561: exit status 2 (290.690774ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-422561" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.55s)
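Every kube-apiserver and kube-scheduler create attempt in the log above dies with the same crio error, "cannot open sd-bus: No such file or directory", which is why the apiserver never comes up and the node cannot register. This run configures crio with cgroup_manager = "systemd" (visible in the RestartCluster provisioning log further down), and the systemd cgroup manager talks to systemd over sd-bus, so a missing bus socket inside the node container would produce exactly this failure. A minimal diagnostic sketch, assuming the docker driver and the ha-422561 profile from this report (the paths are the usual systemd/D-Bus socket locations, not something this log confirms):

	docker exec ha-422561 ls -l /run/systemd/private /run/dbus/system_bus_socket
	docker exec ha-422561 grep -R cgroup_manager /etc/crio/crio.conf.d/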

                                                
                                    
TestMultiControlPlane/serial/StopCluster (1.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-422561 stop --alsologtostderr -v 5: (1.218487994s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5: exit status 7 (75.550002ms)

                                                
                                                
-- stdout --
	ha-422561
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:48:01.184845   83640 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:48:01.185108   83640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:48:01.185117   83640 out.go:374] Setting ErrFile to fd 2...
	I1003 18:48:01.185122   83640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:48:01.185314   83640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:48:01.185474   83640 out.go:368] Setting JSON to false
	I1003 18:48:01.185503   83640 mustload.go:65] Loading cluster: ha-422561
	I1003 18:48:01.185631   83640 notify.go:220] Checking for updates...
	I1003 18:48:01.185825   83640 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:48:01.185837   83640 status.go:174] checking status of ha-422561 ...
	I1003 18:48:01.186316   83640 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:01.203256   83640 status.go:371] ha-422561 host status = "Stopped" (err=<nil>)
	I1003 18:48:01.203274   83640 status.go:384] host is not running, skipping remaining checks
	I1003 18:48:01.203280   83640 status.go:176] ha-422561 status: &{Name:ha-422561 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5": ha-422561
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5": ha-422561
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-422561 status --alsologtostderr -v 5": ha-422561
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-422561
helpers_test.go:243: (dbg) docker inspect ha-422561:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	        "Created": "2025-10-03T18:31:00.396132938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "",
	            "StartedAt": "2025-10-03T18:41:48.184631345Z",
	            "FinishedAt": "2025-10-03T18:48:00.240128679Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hostname",
	        "HostsPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hosts",
	        "LogPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512-json.log",
	        "Name": "/ha-422561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-422561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-422561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	                "LowerDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-422561",
	                "Source": "/var/lib/docker/volumes/ha-422561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422561",
	                "name.minikube.sigs.k8s.io": "ha-422561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de6aa7ca29f453c0d15cb280abde7ee215f554c89e78e3db8a0f7590468114b5",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422561",
	                        "eef8fc426b2b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
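The State block above confirms the stop took effect: Status "exited", ExitCode 130, FinishedAt 18:48:00, consistent with the 1.2s stop recorded at ha_test.go:533. A sketch for pulling just those fields, using the same --format templating the harness already applies elsewhere in this report:

	docker inspect ha-422561 --format '{{.State.Status}} {{.State.ExitCode}} {{.State.FinishedAt}}'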
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561: exit status 7 (78.966577ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 7 (may be ok)
helpers_test.go:249: "ha-422561" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (1.39s)
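The three assertions above (ha_test.go:545, :551, :554) expect the status output to list at least two control-plane entries, three stopped kubelets, and two stopped apiservers, but only the primary node is reported: the secondary and worker nodes were never created, since AddWorkerNode and AddSecondaryNode already failed earlier in this run. A sketch for listing the nodes a profile actually has, assuming the same binary and profile as above (ha-422561-m02 and -m03 would be minikube's default names for the extra nodes, had they been added):

	out/minikube-linux-amd64 -p ha-422561 node list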

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (368.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1003 18:48:14.912530   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:51:51.829820   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (6m7.004058845s)

                                                
                                                
-- stdout --
	* [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:48:01.358006   83697 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:48:01.358289   83697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:48:01.358300   83697 out.go:374] Setting ErrFile to fd 2...
	I1003 18:48:01.358305   83697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:48:01.358536   83697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:48:01.358996   83697 out.go:368] Setting JSON to false
	I1003 18:48:01.359863   83697 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5432,"bootTime":1759511849,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:48:01.359957   83697 start.go:140] virtualization: kvm guest
	I1003 18:48:01.362210   83697 out.go:179] * [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:48:01.363666   83697 notify.go:220] Checking for updates...
	I1003 18:48:01.363675   83697 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:48:01.365090   83697 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:48:01.366363   83697 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:48:01.367623   83697 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:48:01.368893   83697 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:48:01.370300   83697 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:48:01.372005   83697 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:48:01.372415   83697 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:48:01.396617   83697 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:48:01.396706   83697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:48:01.448802   83697 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:48:01.439437332 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:48:01.448910   83697 docker.go:318] overlay module found
	I1003 18:48:01.450884   83697 out.go:179] * Using the docker driver based on existing profile
	I1003 18:48:01.452231   83697 start.go:304] selected driver: docker
	I1003 18:48:01.452246   83697 start.go:924] validating driver "docker" against &{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:48:01.452322   83697 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:48:01.452405   83697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:48:01.509159   83697 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:48:01.498948046 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:48:01.509757   83697 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:48:01.509786   83697 cni.go:84] Creating CNI manager for ""
	I1003 18:48:01.509833   83697 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:48:01.509876   83697 start.go:348] cluster config:
	{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:48:01.511871   83697 out.go:179] * Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	I1003 18:48:01.513298   83697 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:48:01.514481   83697 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:48:01.515584   83697 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:48:01.515621   83697 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:48:01.515631   83697 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:48:01.515642   83697 cache.go:58] Caching tarball of preloaded images
	I1003 18:48:01.515725   83697 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:48:01.515744   83697 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:48:01.515874   83697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:48:01.536348   83697 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:48:01.536367   83697 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:48:01.536383   83697 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:48:01.536411   83697 start.go:360] acquireMachinesLock for ha-422561: {Name:mk32fd04a5d9b5f89831583bab7d7527f4d187a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:48:01.536466   83697 start.go:364] duration metric: took 37.424µs to acquireMachinesLock for "ha-422561"
	I1003 18:48:01.536482   83697 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:48:01.536489   83697 fix.go:54] fixHost starting: 
	I1003 18:48:01.536680   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:01.553807   83697 fix.go:112] recreateIfNeeded on ha-422561: state=Stopped err=<nil>
	W1003 18:48:01.553839   83697 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 18:48:01.555613   83697 out.go:252] * Restarting existing docker container for "ha-422561" ...
	I1003 18:48:01.555684   83697 cli_runner.go:164] Run: docker start ha-422561
	I1003 18:48:01.796448   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:01.815210   83697 kic.go:430] container "ha-422561" state is running.
	I1003 18:48:01.815590   83697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:48:01.834439   83697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:48:01.834700   83697 machine.go:93] provisionDockerMachine start ...
	I1003 18:48:01.834770   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:01.852545   83697 main.go:141] libmachine: Using SSH client type: native
	I1003 18:48:01.852799   83697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1003 18:48:01.852812   83697 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:48:01.853394   83697 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49686->127.0.0.1:32793: read: connection reset by peer
	I1003 18:48:04.996743   83697 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:48:04.996769   83697 ubuntu.go:182] provisioning hostname "ha-422561"
	I1003 18:48:04.996830   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:05.013852   83697 main.go:141] libmachine: Using SSH client type: native
	I1003 18:48:05.014117   83697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1003 18:48:05.014132   83697 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-422561 && echo "ha-422561" | sudo tee /etc/hostname
	I1003 18:48:05.165019   83697 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:48:05.165102   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:05.183718   83697 main.go:141] libmachine: Using SSH client type: native
	I1003 18:48:05.183927   83697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1003 18:48:05.183944   83697 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422561/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:48:05.326262   83697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:48:05.326300   83697 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:48:05.326346   83697 ubuntu.go:190] setting up certificates
	I1003 18:48:05.326359   83697 provision.go:84] configureAuth start
	I1003 18:48:05.326433   83697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:48:05.343930   83697 provision.go:143] copyHostCerts
	I1003 18:48:05.343993   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:48:05.344029   83697 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:48:05.344046   83697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:48:05.344123   83697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:48:05.344224   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:48:05.344246   83697 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:48:05.344254   83697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:48:05.344285   83697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:48:05.344349   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:48:05.344369   83697 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:48:05.344376   83697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:48:05.344403   83697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:48:05.344471   83697 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.ha-422561 san=[127.0.0.1 192.168.49.2 ha-422561 localhost minikube]
	I1003 18:48:05.548175   83697 provision.go:177] copyRemoteCerts
	I1003 18:48:05.548237   83697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:48:05.548272   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:05.565560   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:05.665910   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:48:05.665989   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:48:05.683091   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:48:05.683139   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1003 18:48:05.699514   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:48:05.699586   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 18:48:05.716017   83697 provision.go:87] duration metric: took 389.640217ms to configureAuth
	I1003 18:48:05.716044   83697 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:48:05.716221   83697 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:48:05.716337   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:05.735187   83697 main.go:141] libmachine: Using SSH client type: native
	I1003 18:48:05.735436   83697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1003 18:48:05.735459   83697 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:48:05.988283   83697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:48:05.988310   83697 machine.go:96] duration metric: took 4.153593591s to provisionDockerMachine
	I1003 18:48:05.988321   83697 start.go:293] postStartSetup for "ha-422561" (driver="docker")
	I1003 18:48:05.988333   83697 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:48:05.988396   83697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:48:05.988435   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:06.005743   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:06.106231   83697 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:48:06.109622   83697 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:48:06.109647   83697 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:48:06.109656   83697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:48:06.109722   83697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:48:06.109816   83697 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:48:06.109829   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:48:06.109949   83697 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:48:06.117171   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:48:06.133466   83697 start.go:296] duration metric: took 145.133244ms for postStartSetup
	I1003 18:48:06.133546   83697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:48:06.133640   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:06.151048   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:06.247794   83697 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:48:06.252196   83697 fix.go:56] duration metric: took 4.715699614s for fixHost
	I1003 18:48:06.252229   83697 start.go:83] releasing machines lock for "ha-422561", held for 4.715747117s
	I1003 18:48:06.252292   83697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:48:06.269719   83697 ssh_runner.go:195] Run: cat /version.json
	I1003 18:48:06.269776   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:06.269848   83697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:48:06.269925   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:06.287309   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:06.288536   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:06.440444   83697 ssh_runner.go:195] Run: systemctl --version
	I1003 18:48:06.446644   83697 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:48:06.480099   83697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:48:06.484552   83697 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:48:06.484620   83697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:48:06.492151   83697 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 18:48:06.492174   83697 start.go:495] detecting cgroup driver to use...
	I1003 18:48:06.492207   83697 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:48:06.492242   83697 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:48:06.505874   83697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:48:06.518096   83697 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:48:06.518153   83697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:48:06.532038   83697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:48:06.543572   83697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:48:06.619047   83697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:48:06.695631   83697 docker.go:234] disabling docker service ...
	I1003 18:48:06.695709   83697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:48:06.709304   83697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:48:06.720766   83697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:48:06.794255   83697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:48:06.872577   83697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:48:06.884756   83697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:48:06.898431   83697 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:48:06.898497   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.907185   83697 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:48:06.907288   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.915650   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.923921   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.932255   83697 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:48:06.939698   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.948130   83697 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.955875   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.963958   83697 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:48:06.970620   83697 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:48:06.977236   83697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:48:07.055447   83697 ssh_runner.go:195] Run: sudo systemctl restart crio
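The sed edits above rewrite the CRI-O drop-in before the restart. Reconstructed from those commands (not a captured file, and section headers are assumed), /etc/crio/crio.conf.d/02-crio.conf should now contain roughly:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The unprivileged-port sysctl lets pods bind ports below 1024 without extra capabilities, which addons that serve on port 80 depend on.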
	I1003 18:48:07.158344   83697 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:48:07.158401   83697 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:48:07.162236   83697 start.go:563] Will wait 60s for crictl version
	I1003 18:48:07.162283   83697 ssh_runner.go:195] Run: which crictl
	I1003 18:48:07.165713   83697 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:48:07.189610   83697 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:48:07.189696   83697 ssh_runner.go:195] Run: crio --version
	I1003 18:48:07.216037   83697 ssh_runner.go:195] Run: crio --version
	I1003 18:48:07.243602   83697 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:48:07.244835   83697 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:48:07.261059   83697 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:48:07.264966   83697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
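This /etc/hosts rewrite is idempotent: grep -v first drops any stale host.minikube.internal entry, then the fresh mapping is appended and the file copied back, so repeated starts never accumulate duplicate lines. The same pattern is reused below for control-plane.minikube.internal. Stripped of log quoting, it is equivalent to:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.49.1\thost.minikube.internal\n'
    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts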
	I1003 18:48:07.274777   83697 kubeadm.go:883] updating cluster {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:48:07.274871   83697 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:48:07.275110   83697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:48:07.306722   83697 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:48:07.306745   83697 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:48:07.306802   83697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:48:07.331000   83697 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:48:07.331023   83697 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:48:07.331031   83697 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 18:48:07.331136   83697 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 18:48:07.331212   83697 ssh_runner.go:195] Run: crio config
	I1003 18:48:07.375866   83697 cni.go:84] Creating CNI manager for ""
	I1003 18:48:07.375888   83697 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:48:07.375910   83697 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:48:07.375937   83697 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422561 NodeName:ha-422561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:48:07.376106   83697 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422561"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
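The generated kubeadm.yaml above bundles four documents: InitConfiguration (node-local bootstrap: advertise address, CRI socket, kubelet extra args), ClusterConfiguration (control-plane endpoint, cert SANs, per-component extraArgs), KubeletConfiguration (systemd cgroup driver, disk-based eviction disabled for CI), and KubeProxyConfiguration (cluster CIDR, conntrack overrides). On recent kubeadm releases such a file can be sanity-checked offline; a sketch, assuming kubeadm is on PATH:

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new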
	I1003 18:48:07.376177   83697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:48:07.383986   83697 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:48:07.384055   83697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:48:07.391187   83697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1003 18:48:07.403399   83697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:48:07.414754   83697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1003 18:48:07.426847   83697 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 18:48:07.430235   83697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:48:07.439401   83697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:48:07.516381   83697 ssh_runner.go:195] Run: sudo systemctl start kubelet
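The 359-byte drop-in scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is the unit fragment logged earlier. The empty ExecStart= line is the standard systemd idiom for overriding a command: for a non-oneshot service ExecStart may only be set once, so the inherited value must be cleared before minikube substitutes its own kubelet invocation. After daemon-reload, the merged unit can be inspected with:

    systemctl cat kubelet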
	I1003 18:48:07.538237   83697 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561 for IP: 192.168.49.2
	I1003 18:48:07.538255   83697 certs.go:195] generating shared ca certs ...
	I1003 18:48:07.538271   83697 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:48:07.538437   83697 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:48:07.538512   83697 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:48:07.538528   83697 certs.go:257] generating profile certs ...
	I1003 18:48:07.538625   83697 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key
	I1003 18:48:07.538704   83697 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2ce2e456
	I1003 18:48:07.538754   83697 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key
	I1003 18:48:07.538768   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:48:07.538784   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:48:07.538800   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:48:07.538816   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:48:07.538835   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:48:07.538852   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:48:07.538868   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:48:07.538885   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:48:07.539018   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:48:07.539063   83697 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:48:07.539074   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:48:07.539115   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:48:07.539150   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:48:07.539179   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:48:07.539234   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:48:07.539276   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:48:07.539296   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:48:07.539321   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:48:07.540071   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:48:07.557965   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:48:07.575458   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:48:07.593468   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:48:07.615468   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1003 18:48:07.632748   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:48:07.648762   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:48:07.664587   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:48:07.680650   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:48:07.696584   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:48:07.712414   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:48:07.729163   83697 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:48:07.740601   83697 ssh_runner.go:195] Run: openssl version
	I1003 18:48:07.746326   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:48:07.754771   83697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:48:07.758126   83697 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:48:07.758166   83697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:48:07.791672   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
	I1003 18:48:07.799482   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:48:07.807556   83697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:48:07.811134   83697 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:48:07.811185   83697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:48:07.844703   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:48:07.852290   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:48:07.859877   83697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:48:07.863389   83697 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:48:07.863436   83697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:48:07.897292   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:48:07.905487   83697 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:48:07.909431   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 18:48:07.943717   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 18:48:07.977826   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 18:48:08.011227   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 18:48:08.050549   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 18:48:08.092515   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
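Two different openssl checks are running here. The -hash -noout calls print the subject-name hash that OpenSSL's CA lookup uses, and each ln -fs publishes the corresponding cert as /etc/ssl/certs/<hash>.0 (e.g. b5213941.0 for minikubeCA). The -checkend 86400 calls exit non-zero only if a certificate expires within the next 24 hours, which is how the restart path decides whether control-plane certs need regenerating. The same check on any cert, as a sketch:

    if openssl x509 -noout -checkend 86400 -in cert.crt; then
      echo "valid for at least another day"
    else
      echo "expires within 24h (or already expired)"
    fi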
	I1003 18:48:08.127614   83697 kubeadm.go:400] StartCluster: {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:48:08.127701   83697 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:48:08.127742   83697 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:48:08.154681   83697 cri.go:89] found id: ""
	I1003 18:48:08.154738   83697 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:48:08.162929   83697 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 18:48:08.162947   83697 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 18:48:08.163014   83697 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 18:48:08.169965   83697 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:48:08.170348   83697 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:48:08.170445   83697 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-8669/kubeconfig needs updating (will repair): [kubeconfig missing "ha-422561" cluster setting kubeconfig missing "ha-422561" context setting]
	I1003 18:48:08.170662   83697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:48:08.171209   83697 kapi.go:59] client config for ha-422561: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:48:08.171603   83697 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1003 18:48:08.171622   83697 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1003 18:48:08.171626   83697 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 18:48:08.171630   83697 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1003 18:48:08.171635   83697 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 18:48:08.171700   83697 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1003 18:48:08.172024   83697 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 18:48:08.179145   83697 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1003 18:48:08.179168   83697 kubeadm.go:601] duration metric: took 16.215128ms to restartPrimaryControlPlane
	I1003 18:48:08.179177   83697 kubeadm.go:402] duration metric: took 51.569431ms to StartCluster
	I1003 18:48:08.179192   83697 settings.go:142] acquiring lock: {Name:mk6bc950503a8f341b8aacc07a8bc72d5db3a25c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:48:08.179256   83697 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:48:08.179754   83697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:48:08.179960   83697 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:48:08.180005   83697 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 18:48:08.180077   83697 addons.go:69] Setting storage-provisioner=true in profile "ha-422561"
	I1003 18:48:08.180096   83697 addons.go:238] Setting addon storage-provisioner=true in "ha-422561"
	I1003 18:48:08.180126   83697 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:48:08.180143   83697 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:48:08.180118   83697 addons.go:69] Setting default-storageclass=true in profile "ha-422561"
	I1003 18:48:08.180191   83697 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-422561"
	I1003 18:48:08.180383   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:08.180572   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:08.183165   83697 out.go:179] * Verifying Kubernetes components...
	I1003 18:48:08.184503   83697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:48:08.199461   83697 kapi.go:59] client config for ha-422561: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:48:08.199832   83697 addons.go:238] Setting addon default-storageclass=true in "ha-422561"
	I1003 18:48:08.199880   83697 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:48:08.200383   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:08.200811   83697 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:48:08.202643   83697 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:48:08.202664   83697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 18:48:08.202713   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:08.226707   83697 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 18:48:08.226733   83697 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 18:48:08.226796   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:08.227638   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:08.244287   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:08.283745   83697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:48:08.296260   83697 node_ready.go:35] waiting up to 6m0s for node "ha-422561" to be "Ready" ...
	I1003 18:48:08.335656   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:48:08.351120   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:08.389710   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:08.389751   83697 retry.go:31] will retry after 328.107449ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:08.404951   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:08.404995   83697 retry.go:31] will retry after 321.741218ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:08.718445   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:48:08.726854   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:08.773648   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:08.773686   83697 retry.go:31] will retry after 472.06094ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:08.777934   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:08.777965   83697 retry.go:31] will retry after 427.725934ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.205852   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:48:09.246423   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:09.258516   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.258554   83697 retry.go:31] will retry after 827.773787ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:09.299212   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.299244   83697 retry.go:31] will retry after 477.48466ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.776942   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:09.826781   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.826812   83697 retry.go:31] will retry after 1.085146889s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:10.087227   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:10.137943   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:10.137973   83697 retry.go:31] will retry after 739.377919ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:10.297625   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:10.877756   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:48:10.912311   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:10.929140   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:10.929175   83697 retry.go:31] will retry after 1.497643033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:10.963566   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:10.963603   83697 retry.go:31] will retry after 713.576365ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:11.678080   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:11.729368   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:11.729399   83697 retry.go:31] will retry after 2.048730039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:12.427099   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:12.477658   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:12.477701   83697 retry.go:31] will retry after 2.498808401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:12.797484   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:13.779038   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:13.830173   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:13.830204   83697 retry.go:31] will retry after 4.102789416s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:14.977444   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:15.028118   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:15.028144   83697 retry.go:31] will retry after 2.619354281s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:15.296814   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:17.296893   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:17.648338   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:17.699440   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:17.699475   83697 retry.go:31] will retry after 4.509399124s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:17.933252   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:17.983755   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:17.983783   83697 retry.go:31] will retry after 5.633518758s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:19.297715   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:21.797697   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:22.209174   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:22.259804   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:22.259835   83697 retry.go:31] will retry after 5.445935062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:23.618051   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:23.669865   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:23.669892   83697 retry.go:31] will retry after 8.812204221s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:24.297645   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:26.796887   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:27.706519   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:27.757124   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:27.757152   83697 retry.go:31] will retry after 10.217471518s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:29.296865   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:31.797282   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:32.482714   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:32.535080   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:32.535111   83697 retry.go:31] will retry after 6.964681944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:34.297049   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:36.297155   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:37.974824   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:38.025602   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:38.025636   83697 retry.go:31] will retry after 18.172547929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:38.297586   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:39.499928   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:39.551482   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:39.551509   83697 retry.go:31] will retry after 10.529315365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:40.297633   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:42.796931   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:44.797268   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:46.797590   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:49.296867   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:50.081207   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:50.133196   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:50.133222   83697 retry.go:31] will retry after 12.42585121s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:51.296943   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:53.297831   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:55.796917   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:56.198392   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:56.249657   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:56.249700   83697 retry.go:31] will retry after 29.529741997s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:57.797326   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:00.297226   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:02.297421   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:49:02.559843   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:49:02.612999   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:49:02.613029   83697 retry.go:31] will retry after 27.551629332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:49:04.797075   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:06.797507   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:09.297080   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:11.297269   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:13.796944   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:15.797079   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:17.797368   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:19.797700   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:21.797785   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:24.296940   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:49:25.779700   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:49:25.831805   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:49:25.831933   83697 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1003 18:49:26.796936   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:28.797330   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:49:30.164992   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:49:30.215742   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:49:30.215772   83697 retry.go:31] will retry after 28.778272146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:49:30.797426   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:33.296941   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:35.297159   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:37.297417   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:39.297817   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:41.796863   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:44.296913   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:46.796856   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:48.797475   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:50.797629   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:53.296889   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:55.796908   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:57.797151   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:49:58.994596   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:49:59.046263   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:49:59.046378   83697 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1003 18:49:59.048398   83697 out.go:179] * Enabled addons: 
	I1003 18:49:59.049773   83697 addons.go:514] duration metric: took 1m50.869773501s for enable addons: enabled=[]
	W1003 18:50:00.296924   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:02.297548   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:04.797690   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:07.297348   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:09.297437   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:11.797512   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:14.297319   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:16.797104   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:19.296854   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:21.297701   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:23.297802   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:25.297849   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:27.797780   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:30.297741   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:32.797494   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:35.297010   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:37.797828   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:40.297711   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:42.797757   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:45.297560   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:47.297687   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:49.797412   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:51.797571   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:54.297548   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:56.797814   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:59.296806   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:01.297710   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:03.797647   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:05.797831   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:08.296870   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:10.297744   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:12.797784   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:15.297698   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:17.797688   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:19.797840   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:22.296774   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:24.297664   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:26.797683   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:29.297653   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:31.797617   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:34.297512   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:36.297549   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:38.797789   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:41.297808   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:43.797711   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:46.297515   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:48.297601   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:50.797480   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:52.797630   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:55.297518   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:57.297610   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:59.797566   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:01.797845   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:04.297723   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:06.797751   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:09.296900   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:11.297073   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:13.797101   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:16.296892   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:18.297089   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:20.297441   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:22.297830   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:24.797001   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:26.797103   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:28.797309   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:30.797733   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:33.296890   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:35.297023   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:37.297485   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:39.796821   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:41.796908   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:44.297519   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:46.297801   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:48.797226   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:50.797520   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:53.297667   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:55.796948   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:57.797147   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:59.797195   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:01.797398   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:03.797694   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:06.296852   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:08.297011   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:10.297275   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:12.796835   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:14.797025   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:16.797428   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:18.797693   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:21.296967   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:23.297118   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:25.297443   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:27.796863   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:29.796896   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:31.797097   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:33.797406   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:36.296856   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:38.297182   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:40.297561   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:42.796949   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:44.797310   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:46.797517   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:49.296798   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:51.796965   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:53.797416   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:56.296843   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:58.297143   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:54:00.297294   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:54:02.297496   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:54:04.797414   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:54:07.296848   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:54:08.296599   83697 node_ready.go:38] duration metric: took 6m0.000289942s for node "ha-422561" to be "Ready" ...
	I1003 18:54:08.298641   83697 out.go:203] 
	W1003 18:54:08.300195   83697 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1003 18:54:08.300213   83697 out.go:285] * 
	W1003 18:54:08.301827   83697 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:54:08.303083   83697 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-amd64 -p ha-422561 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
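For local triage, the failing invocation above can be replayed as-is; this sketch assumes a locally built minikube binary at out/minikube-linux-amd64 and a running Docker daemon, and reuses the profile name ha-422561 from this run:

	# Reproduce the RestartCluster start that exited with status 80
	out/minikube-linux-amd64 -p ha-422561 start --wait true --alsologtostderr -v 5 --driver=docker --container-runtime=crio
	# On failure, collect logs for the issue template referenced in the output above
	out/minikube-linux-amd64 -p ha-422561 logs --file=logs.txt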
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-422561
helpers_test.go:243: (dbg) docker inspect ha-422561:

-- stdout --
	[
	    {
	        "Id": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	        "Created": "2025-10-03T18:31:00.396132938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 83894,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T18:48:01.584921869Z",
	            "FinishedAt": "2025-10-03T18:48:00.240128679Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hostname",
	        "HostsPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hosts",
	        "LogPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512-json.log",
	        "Name": "/ha-422561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-422561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-422561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	                "LowerDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-422561",
	                "Source": "/var/lib/docker/volumes/ha-422561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422561",
	                "name.minikube.sigs.k8s.io": "ha-422561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b7bc183f57948a25d46552eb6c438fe564ed77e2518bcbeb88c2428dc903e44c",
	            "SandboxKey": "/var/run/docker/netns/b7bc183f5794",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:4a:c7:54:b6:6a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de6aa7ca29f453c0d15cb280abde7ee215f554c89e78e3db8a0f7590468114b5",
	                    "EndpointID": "3c59a4bfdbcc71d01f483fb97819fde7e13586cafec98410913d5f8c234327ac",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422561",
	                        "eef8fc426b2b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
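Note: the NetworkSettings block above shows each container port published on 127.0.0.1 with a dynamically assigned host port. A single mapping can be pulled out with the same Go-template trick the harness itself uses later in these logs; a sketch:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-422561
	# prints 32796, matching the inspect output above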
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561: exit status 2 (306.130481ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node add --alsologtostderr -v 5                                                    │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node stop m02 --alsologtostderr -v 5                                               │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node start m02 --alsologtostderr -v 5                                              │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │                     │
	│ node    │ ha-422561 node list --alsologtostderr -v 5                                                   │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │                     │
	│ stop    │ ha-422561 stop --alsologtostderr -v 5                                                        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │ 03 Oct 25 18:41 UTC │
	│ start   │ ha-422561 start --wait true --alsologtostderr -v 5                                           │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │                     │
	│ node    │ ha-422561 node list --alsologtostderr -v 5                                                   │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:47 UTC │                     │
	│ node    │ ha-422561 node delete m03 --alsologtostderr -v 5                                             │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:47 UTC │                     │
	│ stop    │ ha-422561 stop --alsologtostderr -v 5                                                        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:47 UTC │ 03 Oct 25 18:48 UTC │
	│ start   │ ha-422561 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:48 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:48:01
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:48:01.358006   83697 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:48:01.358289   83697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:48:01.358300   83697 out.go:374] Setting ErrFile to fd 2...
	I1003 18:48:01.358305   83697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:48:01.358536   83697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:48:01.358996   83697 out.go:368] Setting JSON to false
	I1003 18:48:01.359863   83697 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5432,"bootTime":1759511849,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:48:01.359957   83697 start.go:140] virtualization: kvm guest
	I1003 18:48:01.362210   83697 out.go:179] * [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:48:01.363666   83697 notify.go:220] Checking for updates...
	I1003 18:48:01.363675   83697 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:48:01.365090   83697 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:48:01.366363   83697 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:48:01.367623   83697 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:48:01.368893   83697 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:48:01.370300   83697 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:48:01.372005   83697 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:48:01.372415   83697 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:48:01.396617   83697 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:48:01.396706   83697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:48:01.448802   83697 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:48:01.439437332 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:48:01.448910   83697 docker.go:318] overlay module found
	I1003 18:48:01.450884   83697 out.go:179] * Using the docker driver based on existing profile
	I1003 18:48:01.452231   83697 start.go:304] selected driver: docker
	I1003 18:48:01.452246   83697 start.go:924] validating driver "docker" against &{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:48:01.452322   83697 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:48:01.452405   83697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:48:01.509159   83697 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:48:01.498948046 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:48:01.509757   83697 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:48:01.509786   83697 cni.go:84] Creating CNI manager for ""
	I1003 18:48:01.509833   83697 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:48:01.509876   83697 start.go:348] cluster config:
	{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:48:01.511871   83697 out.go:179] * Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	I1003 18:48:01.513298   83697 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:48:01.514481   83697 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:48:01.515584   83697 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:48:01.515621   83697 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:48:01.515631   83697 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:48:01.515642   83697 cache.go:58] Caching tarball of preloaded images
	I1003 18:48:01.515725   83697 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:48:01.515744   83697 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:48:01.515874   83697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:48:01.536348   83697 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:48:01.536367   83697 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:48:01.536383   83697 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:48:01.536411   83697 start.go:360] acquireMachinesLock for ha-422561: {Name:mk32fd04a5d9b5f89831583bab7d7527f4d187a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:48:01.536466   83697 start.go:364] duration metric: took 37.424µs to acquireMachinesLock for "ha-422561"
	I1003 18:48:01.536482   83697 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:48:01.536489   83697 fix.go:54] fixHost starting: 
	I1003 18:48:01.536680   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:01.553807   83697 fix.go:112] recreateIfNeeded on ha-422561: state=Stopped err=<nil>
	W1003 18:48:01.553839   83697 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 18:48:01.555613   83697 out.go:252] * Restarting existing docker container for "ha-422561" ...
	I1003 18:48:01.555684   83697 cli_runner.go:164] Run: docker start ha-422561
	I1003 18:48:01.796448   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:01.815210   83697 kic.go:430] container "ha-422561" state is running.
	I1003 18:48:01.815590   83697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:48:01.834439   83697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:48:01.834700   83697 machine.go:93] provisionDockerMachine start ...
	I1003 18:48:01.834770   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:01.852545   83697 main.go:141] libmachine: Using SSH client type: native
	I1003 18:48:01.852799   83697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1003 18:48:01.852812   83697 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:48:01.853394   83697 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49686->127.0.0.1:32793: read: connection reset by peer
	I1003 18:48:04.996743   83697 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:48:04.996769   83697 ubuntu.go:182] provisioning hostname "ha-422561"
	I1003 18:48:04.996830   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:05.013852   83697 main.go:141] libmachine: Using SSH client type: native
	I1003 18:48:05.014117   83697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1003 18:48:05.014132   83697 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-422561 && echo "ha-422561" | sudo tee /etc/hostname
	I1003 18:48:05.165019   83697 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:48:05.165102   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:05.183718   83697 main.go:141] libmachine: Using SSH client type: native
	I1003 18:48:05.183927   83697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1003 18:48:05.183944   83697 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422561/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:48:05.326262   83697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:48:05.326300   83697 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:48:05.326346   83697 ubuntu.go:190] setting up certificates
	I1003 18:48:05.326359   83697 provision.go:84] configureAuth start
	I1003 18:48:05.326433   83697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:48:05.343930   83697 provision.go:143] copyHostCerts
	I1003 18:48:05.343993   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:48:05.344029   83697 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:48:05.344046   83697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:48:05.344123   83697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:48:05.344224   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:48:05.344246   83697 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:48:05.344254   83697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:48:05.344285   83697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:48:05.344349   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:48:05.344369   83697 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:48:05.344376   83697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:48:05.344403   83697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:48:05.344471   83697 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.ha-422561 san=[127.0.0.1 192.168.49.2 ha-422561 localhost minikube]
	I1003 18:48:05.548175   83697 provision.go:177] copyRemoteCerts
	I1003 18:48:05.548237   83697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:48:05.548272   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:05.565560   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:05.665910   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:48:05.665989   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:48:05.683091   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:48:05.683139   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1003 18:48:05.699514   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:48:05.699586   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 18:48:05.716017   83697 provision.go:87] duration metric: took 389.640217ms to configureAuth
	I1003 18:48:05.716044   83697 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:48:05.716221   83697 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:48:05.716337   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:05.735187   83697 main.go:141] libmachine: Using SSH client type: native
	I1003 18:48:05.735436   83697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1003 18:48:05.735459   83697 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:48:05.988283   83697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:48:05.988310   83697 machine.go:96] duration metric: took 4.153593591s to provisionDockerMachine
	I1003 18:48:05.988321   83697 start.go:293] postStartSetup for "ha-422561" (driver="docker")
	I1003 18:48:05.988333   83697 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:48:05.988396   83697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:48:05.988435   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:06.005743   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:06.106231   83697 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:48:06.109622   83697 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:48:06.109647   83697 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:48:06.109656   83697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:48:06.109722   83697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:48:06.109816   83697 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:48:06.109829   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:48:06.109949   83697 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:48:06.117171   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:48:06.133466   83697 start.go:296] duration metric: took 145.133244ms for postStartSetup
	I1003 18:48:06.133546   83697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:48:06.133640   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:06.151048   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:06.247794   83697 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:48:06.252196   83697 fix.go:56] duration metric: took 4.715699614s for fixHost
	I1003 18:48:06.252229   83697 start.go:83] releasing machines lock for "ha-422561", held for 4.715747117s
	I1003 18:48:06.252292   83697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:48:06.269719   83697 ssh_runner.go:195] Run: cat /version.json
	I1003 18:48:06.269776   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:06.269848   83697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:48:06.269925   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:06.287309   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:06.288536   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:06.440444   83697 ssh_runner.go:195] Run: systemctl --version
	I1003 18:48:06.446644   83697 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:48:06.480099   83697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:48:06.484552   83697 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:48:06.484620   83697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:48:06.492151   83697 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 18:48:06.492174   83697 start.go:495] detecting cgroup driver to use...
	I1003 18:48:06.492207   83697 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:48:06.492242   83697 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:48:06.505874   83697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:48:06.518096   83697 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:48:06.518153   83697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:48:06.532038   83697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:48:06.543572   83697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:48:06.619047   83697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:48:06.695631   83697 docker.go:234] disabling docker service ...
	I1003 18:48:06.695709   83697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:48:06.709304   83697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:48:06.720766   83697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:48:06.794255   83697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:48:06.872577   83697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:48:06.884756   83697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:48:06.898431   83697 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:48:06.898497   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.907185   83697 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:48:06.907288   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.915650   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.923921   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.932255   83697 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:48:06.939698   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.948130   83697 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.955875   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.963958   83697 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:48:06.970620   83697 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:48:06.977236   83697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:48:07.055447   83697 ssh_runner.go:195] Run: sudo systemctl restart crio
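Note: the run of sed invocations above boils down to the following hand-runnable sketch against /etc/crio/crio.conf.d/02-crio.conf inside the node (keys, values, and paths are taken from the log lines; collapsing them into one command is our illustration, not what minikube executes):

	sudo sed -i \
	  -e 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' \
	  -e 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' \
	  /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio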
	I1003 18:48:07.158344   83697 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:48:07.158401   83697 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:48:07.162236   83697 start.go:563] Will wait 60s for crictl version
	I1003 18:48:07.162283   83697 ssh_runner.go:195] Run: which crictl
	I1003 18:48:07.165713   83697 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:48:07.189610   83697 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:48:07.189696   83697 ssh_runner.go:195] Run: crio --version
	I1003 18:48:07.216037   83697 ssh_runner.go:195] Run: crio --version
	I1003 18:48:07.243602   83697 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:48:07.244835   83697 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:48:07.261059   83697 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:48:07.264966   83697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:48:07.274777   83697 kubeadm.go:883] updating cluster {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:48:07.274871   83697 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:48:07.275110   83697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:48:07.306722   83697 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:48:07.306745   83697 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:48:07.306802   83697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:48:07.331000   83697 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:48:07.331023   83697 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:48:07.331031   83697 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 18:48:07.331136   83697 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 18:48:07.331212   83697 ssh_runner.go:195] Run: crio config
	I1003 18:48:07.375866   83697 cni.go:84] Creating CNI manager for ""
	I1003 18:48:07.375888   83697 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:48:07.375910   83697 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:48:07.375937   83697 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422561 NodeName:ha-422561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:48:07.376106   83697 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422561"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
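	The kubeadm config logged above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written out below as /var/tmp/minikube/kubeadm.yaml.new (2205 bytes). A quick, standalone way to sanity-check that such a stream parses, sketched with gopkg.in/yaml.v3 (not part of minikube):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	// Sanity-checks that a multi-document kubeadm config (like the stream
	// logged above) parses, printing each document's apiVersion and kind.
	func main() {
		if len(os.Args) != 2 {
			fmt.Fprintln(os.Stderr, "usage: checkcfg <kubeadm.yaml>")
			os.Exit(1)
		}
		f, err := os.Open(os.Args[1])
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for n := 1; ; n++ {
			var doc map[string]interface{}
			err := dec.Decode(&doc)
			if err == io.EOF {
				break
			}
			if err != nil {
				panic(fmt.Sprintf("document %d: %v", n, err))
			}
			fmt.Printf("document %d: apiVersion=%v kind=%v\n", n, doc["apiVersion"], doc["kind"])
		}
	}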
	
	I1003 18:48:07.376177   83697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:48:07.383986   83697 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:48:07.384055   83697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:48:07.391187   83697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1003 18:48:07.403399   83697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:48:07.414754   83697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1003 18:48:07.426847   83697 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 18:48:07.430235   83697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
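	The bash one-liner above updates /etc/hosts idempotently: grep -v strips any stale control-plane.minikube.internal entry, the fresh mapping is appended, and the temp file is copied into place with sudo. The same pattern in Go, as a sketch over an arbitrary hosts file (the real flow runs the shell pipeline over SSH; "hosts.txt" is a placeholder path):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost rewrites hostsPath so it contains exactly one "ip<TAB>host"
	// line, preserving all unrelated entries.
	func upsertHost(hostsPath, ip, host string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Equivalent of grep -v $'\t<host>$': drop any stale mapping.
			if strings.HasSuffix(line, "\t"+host) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := upsertHost("hosts.txt", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}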
	I1003 18:48:07.439401   83697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:48:07.516381   83697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:48:07.538237   83697 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561 for IP: 192.168.49.2
	I1003 18:48:07.538255   83697 certs.go:195] generating shared ca certs ...
	I1003 18:48:07.538271   83697 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:48:07.538437   83697 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:48:07.538512   83697 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:48:07.538528   83697 certs.go:257] generating profile certs ...
	I1003 18:48:07.538625   83697 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key
	I1003 18:48:07.538704   83697 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2ce2e456
	I1003 18:48:07.538754   83697 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key
	I1003 18:48:07.538768   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:48:07.538784   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:48:07.538800   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:48:07.538816   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:48:07.538835   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:48:07.538852   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:48:07.538868   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:48:07.538885   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:48:07.539018   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:48:07.539063   83697 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:48:07.539074   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:48:07.539115   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:48:07.539150   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:48:07.539179   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:48:07.539234   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:48:07.539276   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:48:07.539296   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:48:07.539321   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:48:07.540071   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:48:07.557965   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:48:07.575458   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:48:07.593468   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:48:07.615468   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1003 18:48:07.632748   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:48:07.648762   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:48:07.664587   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:48:07.680650   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:48:07.696584   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:48:07.712414   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:48:07.729163   83697 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:48:07.740601   83697 ssh_runner.go:195] Run: openssl version
	I1003 18:48:07.746326   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:48:07.754771   83697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:48:07.758126   83697 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:48:07.758166   83697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:48:07.791672   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
	I1003 18:48:07.799482   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:48:07.807556   83697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:48:07.811134   83697 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:48:07.811185   83697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:48:07.844703   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:48:07.852290   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:48:07.859877   83697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:48:07.863389   83697 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:48:07.863436   83697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:48:07.897292   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
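	Each certificate above is installed using the OpenSSL c_rehash convention: openssl x509 -hash prints a short subject hash, and the certificate is symlinked as /etc/ssl/certs/<hash>.0 (e.g. b5213941.0 for minikubeCA.pem) so TLS libraries can locate it by subject. A Go sketch of those two steps, shelling out to the openssl binary as the log does:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash computes the OpenSSL subject hash of certPath and
	// symlinks it as <hash>.0 under certsDir, mirroring the shell steps above.
	func linkBySubjectHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // ln -fs semantics: replace any existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}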
	I1003 18:48:07.905487   83697 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:48:07.909431   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 18:48:07.943717   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 18:48:07.977826   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 18:48:08.011227   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 18:48:08.050549   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 18:48:08.092515   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
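	The openssl x509 -checkend 86400 runs above exit nonzero if a certificate expires within the next 24 hours; that is how the existing control-plane certs are vetted before a restart is attempted. The equivalent check in pure Go, as a sketch:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within
	// d, matching `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}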
	I1003 18:48:08.127614   83697 kubeadm.go:400] StartCluster: {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:48:08.127701   83697 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:48:08.127742   83697 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:48:08.154681   83697 cri.go:89] found id: ""
	I1003 18:48:08.154738   83697 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:48:08.162929   83697 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 18:48:08.162947   83697 kubeadm.go:597] restartPrimaryControlPlane start ...
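	The sudo ls probe above is the restart-detection step: if the kubelet flags file, the kubelet config, and the etcd data dir all exist, the code opts for a cluster restart instead of a fresh kubeadm init. A sketch of that check (the helper name is hypothetical):

	package main

	import (
		"fmt"
		"os"
	)

	// looksLikeExistingCluster mirrors the probe in the log: kubeadm-flags.env,
	// the kubelet config, and the etcd data dir must all exist for a "cluster
	// restart" to be attempted instead of a fresh init.
	func looksLikeExistingCluster() bool {
		for _, p := range []string{
			"/var/lib/kubelet/kubeadm-flags.env",
			"/var/lib/kubelet/config.yaml",
			"/var/lib/minikube/etcd",
		} {
			if _, err := os.Stat(p); err != nil {
				return false
			}
		}
		return true
	}

	func main() {
		fmt.Println("existing cluster:", looksLikeExistingCluster())
	}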
	I1003 18:48:08.163014   83697 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 18:48:08.169965   83697 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:48:08.170348   83697 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:48:08.170445   83697 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-8669/kubeconfig needs updating (will repair): [kubeconfig missing "ha-422561" cluster setting kubeconfig missing "ha-422561" context setting]
	I1003 18:48:08.170662   83697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:48:08.171209   83697 kapi.go:59] client config for ha-422561: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:48:08.171603   83697 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1003 18:48:08.171622   83697 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1003 18:48:08.171626   83697 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 18:48:08.171630   83697 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1003 18:48:08.171635   83697 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 18:48:08.171700   83697 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1003 18:48:08.172024   83697 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 18:48:08.179145   83697 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1003 18:48:08.179168   83697 kubeadm.go:601] duration metric: took 16.215128ms to restartPrimaryControlPlane
	I1003 18:48:08.179177   83697 kubeadm.go:402] duration metric: took 51.569431ms to StartCluster
	I1003 18:48:08.179192   83697 settings.go:142] acquiring lock: {Name:mk6bc950503a8f341b8aacc07a8bc72d5db3a25c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:48:08.179256   83697 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:48:08.179754   83697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:48:08.179960   83697 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:48:08.180005   83697 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 18:48:08.180077   83697 addons.go:69] Setting storage-provisioner=true in profile "ha-422561"
	I1003 18:48:08.180096   83697 addons.go:238] Setting addon storage-provisioner=true in "ha-422561"
	I1003 18:48:08.180126   83697 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:48:08.180143   83697 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:48:08.180118   83697 addons.go:69] Setting default-storageclass=true in profile "ha-422561"
	I1003 18:48:08.180191   83697 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-422561"
	I1003 18:48:08.180383   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:08.180572   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:08.183165   83697 out.go:179] * Verifying Kubernetes components...
	I1003 18:48:08.184503   83697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:48:08.199461   83697 kapi.go:59] client config for ha-422561: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:48:08.199832   83697 addons.go:238] Setting addon default-storageclass=true in "ha-422561"
	I1003 18:48:08.199880   83697 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:48:08.200383   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:08.200811   83697 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:48:08.202643   83697 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:48:08.202664   83697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 18:48:08.202713   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:08.226707   83697 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 18:48:08.226733   83697 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 18:48:08.226796   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:08.227638   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:08.244287   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:08.283745   83697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:48:08.296260   83697 node_ready.go:35] waiting up to 6m0s for node "ha-422561" to be "Ready" ...
	I1003 18:48:08.335656   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:48:08.351120   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:08.389710   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:08.389751   83697 retry.go:31] will retry after 328.107449ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:08.404951   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:08.404995   83697 retry.go:31] will retry after 321.741218ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
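	From here the log is dominated by the addon apply loop: each kubectl apply fails with connection refused while the apiserver is still coming up, and retry.go schedules another attempt after a jittered, roughly doubling delay (328ms, 472ms, 827ms, ... capping out near 18s below). A generic sketch of that policy in Go; minikube's retry package differs in detail:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff runs op until it succeeds or attempts are exhausted,
	// sleeping a jittered, exponentially growing delay between tries.
	func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
		delay := base
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			// +/-25% jitter so parallel retries don't synchronize.
			jitter := time.Duration(rand.Int63n(int64(delay)/2)) - delay/4
			fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
			time.Sleep(delay + jitter)
			delay *= 2
		}
		return err
	}

	func main() {
		n := 0
		_ = retryWithBackoff(5, 300*time.Millisecond, func() error {
			n++
			if n < 4 {
				return fmt.Errorf("connection refused (attempt %d)", n)
			}
			return nil
		})
	}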
	I1003 18:48:08.718445   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:48:08.726854   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:08.773648   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:08.773686   83697 retry.go:31] will retry after 472.06094ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:08.777934   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:08.777965   83697 retry.go:31] will retry after 427.725934ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.205852   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:48:09.246423   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:09.258516   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.258554   83697 retry.go:31] will retry after 827.773787ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:09.299212   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.299244   83697 retry.go:31] will retry after 477.48466ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.776942   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:09.826781   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.826812   83697 retry.go:31] will retry after 1.085146889s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:10.087227   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:10.137943   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:10.137973   83697 retry.go:31] will retry after 739.377919ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:10.297625   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
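	Interleaved with the apply retries, node_ready.go polls the node's Ready condition on a fixed cadence (every ~2.5s here), swallowing transient apiserver errors like the one above rather than failing. The polling shape expressed with client-go, as a sketch (the TLS file paths are the ones from the log):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	// waitNodeReady polls until the named node reports Ready=True, treating any
	// GET error (e.g. "connection refused") as retryable rather than fatal.
	func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.Background(), 2500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					fmt.Printf("error getting node %q (will retry): %v\n", name, err)
					return false, nil // keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg := &rest.Config{
			Host: "https://192.168.49.2:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key",
				CAFile:   "/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt",
			},
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(cs, "ha-422561", 6*time.Minute); err != nil {
			panic(err)
		}
	}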
	I1003 18:48:10.877756   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:48:10.912311   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:10.929140   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:10.929175   83697 retry.go:31] will retry after 1.497643033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:10.963566   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:10.963603   83697 retry.go:31] will retry after 713.576365ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:11.678080   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:11.729368   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:11.729399   83697 retry.go:31] will retry after 2.048730039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:12.427099   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:12.477658   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:12.477701   83697 retry.go:31] will retry after 2.498808401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:12.797484   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:13.779038   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:13.830173   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:13.830204   83697 retry.go:31] will retry after 4.102789416s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:14.977444   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:15.028118   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:15.028144   83697 retry.go:31] will retry after 2.619354281s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:15.296814   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:17.296893   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:17.648338   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:17.699440   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:17.699475   83697 retry.go:31] will retry after 4.509399124s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:17.933252   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:17.983755   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:17.983783   83697 retry.go:31] will retry after 5.633518758s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:19.297715   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:21.797697   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:22.209174   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:22.259804   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:22.259835   83697 retry.go:31] will retry after 5.445935062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:23.618051   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:23.669865   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:23.669892   83697 retry.go:31] will retry after 8.812204221s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:24.297645   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:26.796887   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:27.706519   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:27.757124   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:27.757152   83697 retry.go:31] will retry after 10.217471518s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:29.296865   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:31.797282   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:32.482714   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:32.535080   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:32.535111   83697 retry.go:31] will retry after 6.964681944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:34.297049   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:36.297155   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:37.974824   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:38.025602   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:38.025636   83697 retry.go:31] will retry after 18.172547929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:38.297586   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:39.499928   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:39.551482   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:39.551509   83697 retry.go:31] will retry after 10.529315365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:40.297633   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:42.796931   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:44.797268   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:46.797590   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:49.296867   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:50.081207   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:50.133196   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:50.133222   83697 retry.go:31] will retry after 12.42585121s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:51.296943   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:53.297831   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:55.796917   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:56.198392   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:56.249657   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:56.249700   83697 retry.go:31] will retry after 29.529741997s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:57.797326   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:00.297226   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:02.297421   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:49:02.559843   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:49:02.612999   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:49:02.613029   83697 retry.go:31] will retry after 27.551629332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:49:04.797075   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:06.797507   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:09.297080   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:11.297269   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:13.796944   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:15.797079   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:17.797368   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:19.797700   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:21.797785   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:24.296940   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:49:25.779700   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:49:25.831805   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:49:25.831933   83697 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1003 18:49:26.796936   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:28.797330   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:49:30.164992   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:49:30.215742   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:49:30.215772   83697 retry.go:31] will retry after 28.778272146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:49:30.797426   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:33.296941   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:35.297159   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:37.297417   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:39.297817   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:41.796863   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:44.296913   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:46.796856   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:48.797475   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:50.797629   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:53.296889   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:55.796908   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:57.797151   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:49:58.994596   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:49:59.046263   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:49:59.046378   83697 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1003 18:49:59.048398   83697 out.go:179] * Enabled addons: 
	I1003 18:49:59.049773   83697 addons.go:514] duration metric: took 1m50.869773501s for enable addons: enabled=[]
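Note: the `retry.go:31] will retry after …` lines above come from minikube's retry helper, which re-runs the failing `kubectl apply` with growing, jittered delays until its time budget is spent; once both addons exhaust their retries, the run ends with `enabled=[]` as shown. A minimal sketch of that backoff pattern — the function name, constants, and jitter scheme here are illustrative, not minikube's actual implementation:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs apply until it succeeds or maxElapsed is spent.
// The jittered, roughly doubling delays mimic the "will retry after Ns"
// cadence visible in the log above (illustrative, not minikube's code).
func retryWithBackoff(apply func() error, maxElapsed time.Duration) error {
	start := time.Now()
	delay := 5 * time.Second
	for {
		err := apply()
		if err == nil {
			return nil
		}
		if time.Since(start) > maxElapsed {
			return fmt.Errorf("giving up after %s: %w", time.Since(start), err)
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2
	}
}

func main() {
	err := retryWithBackoff(func() error {
		// Stand-in for the kubectl apply that keeps hitting "connection refused".
		return errors.New("connect: connection refused")
	}, 30*time.Second)
	fmt.Println(err)
}
```

Backoff of this shape never helps here, because the failure is not transient: the apiserver the apply needs is the very thing that never started.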
	W1003 18:50:00.296924   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:02.297548   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:04.797690   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:07.297348   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:09.297437   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:11.797512   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:14.297319   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:16.797104   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:19.296854   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:21.297701   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:23.297802   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:25.297849   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:27.797780   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:30.297741   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:32.797494   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:35.297010   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:37.797828   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:40.297711   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:42.797757   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:45.297560   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:47.297687   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:49.797412   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:51.797571   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:54.297548   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:56.797814   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:59.296806   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:01.297710   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:03.797647   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:05.797831   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:08.296870   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:10.297744   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:12.797784   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:15.297698   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:17.797688   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:19.797840   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:22.296774   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:24.297664   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:26.797683   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:29.297653   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:31.797617   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:34.297512   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:36.297549   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:38.797789   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:41.297808   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:43.797711   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:46.297515   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:48.297601   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:50.797480   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:52.797630   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:55.297518   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:57.297610   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:51:59.797566   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:01.797845   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:04.297723   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:06.797751   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:09.296900   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:11.297073   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:13.797101   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:16.296892   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:18.297089   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:20.297441   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:22.297830   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:24.797001   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:26.797103   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:28.797309   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:30.797733   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:33.296890   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:35.297023   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:37.297485   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:39.796821   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:41.796908   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:44.297519   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:46.297801   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:48.797226   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:50.797520   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:53.297667   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:55.796948   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:57.797147   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:52:59.797195   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:01.797398   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:03.797694   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:06.296852   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:08.297011   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:10.297275   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:12.796835   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:14.797025   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:16.797428   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:18.797693   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:21.296967   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:23.297118   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:25.297443   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:27.796863   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:29.796896   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:31.797097   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:33.797406   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:36.296856   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:38.297182   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:40.297561   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:42.796949   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:44.797310   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:46.797517   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:49.296798   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:51.796965   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:53.797416   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:56.296843   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:58.297143   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:54:00.297294   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:54:02.297496   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:54:04.797414   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:54:07.296848   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:54:08.296599   83697 node_ready.go:38] duration metric: took 6m0.000289942s for node "ha-422561" to be "Ready" ...
	I1003 18:54:08.298641   83697 out.go:203] 
	W1003 18:54:08.300195   83697 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1003 18:54:08.300213   83697 out.go:285] * 
	W1003 18:54:08.301827   83697 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:54:08.303083   83697 out.go:203] 
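The six-minute stretch of `node_ready.go:55` warnings and the final `GUEST_START` / `WaitNodeCondition: context deadline exceeded` exit reduce to one fact: the apiserver on 192.168.49.2:8443 never started listening, so the node could never be observed `Ready`. A minimal client-go sketch of the kind of Ready-condition poll `node_ready.go` performs (the helper is illustrative, not minikube's code; kubeconfig path and node name taken from the log above):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition until it is True or ctx expires.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// With the apiserver down, this is the repeated
			// "connection refused" warning seen in the log.
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waiting for node %q to be Ready: %w", name, ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, cs, "ha-422561"))
}
```

The 6m timeout matches the `took 6m0.000289942s for node "ha-422561" to be "Ready"` duration metric above.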
	
	
	==> CRI-O <==
	Oct 03 18:54:01 ha-422561 crio[520]: time="2025-10-03T18:54:01.647492088Z" level=info msg="createCtr: removing container d70706743f8cfa17803087c141806379a0de89bfcfcb47168ddb9374db00f8f6" id=a9149373-99de-4abe-ad82-5e773bb9ceac name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:01 ha-422561 crio[520]: time="2025-10-03T18:54:01.647524989Z" level=info msg="createCtr: deleting container d70706743f8cfa17803087c141806379a0de89bfcfcb47168ddb9374db00f8f6 from storage" id=a9149373-99de-4abe-ad82-5e773bb9ceac name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:01 ha-422561 crio[520]: time="2025-10-03T18:54:01.649821528Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-422561_kube-system_6ecf19dd95945fcfeaff027fad95c1ee_0" id=a9149373-99de-4abe-ad82-5e773bb9ceac name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:03 ha-422561 crio[520]: time="2025-10-03T18:54:03.621357476Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=30fb286e-ce62-4adc-9d85-e87ee004d256 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:54:03 ha-422561 crio[520]: time="2025-10-03T18:54:03.622249068Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=8eba2baf-c9ce-4756-b2ac-cf5748579e90 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:54:03 ha-422561 crio[520]: time="2025-10-03T18:54:03.623085958Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-422561/kube-scheduler" id=e2773dc5-e5b4-40f0-85ce-9ba6d287055f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:03 ha-422561 crio[520]: time="2025-10-03T18:54:03.623277343Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:03 ha-422561 crio[520]: time="2025-10-03T18:54:03.627317597Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:03 ha-422561 crio[520]: time="2025-10-03T18:54:03.62773922Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:03 ha-422561 crio[520]: time="2025-10-03T18:54:03.645699092Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=e2773dc5-e5b4-40f0-85ce-9ba6d287055f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:03 ha-422561 crio[520]: time="2025-10-03T18:54:03.647044315Z" level=info msg="createCtr: deleting container ID f7a34ef2837124c4149de511b8e4b8763d42ab1cc1b34ad4e960590c9eece03f from idIndex" id=e2773dc5-e5b4-40f0-85ce-9ba6d287055f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:03 ha-422561 crio[520]: time="2025-10-03T18:54:03.647082514Z" level=info msg="createCtr: removing container f7a34ef2837124c4149de511b8e4b8763d42ab1cc1b34ad4e960590c9eece03f" id=e2773dc5-e5b4-40f0-85ce-9ba6d287055f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:03 ha-422561 crio[520]: time="2025-10-03T18:54:03.647112111Z" level=info msg="createCtr: deleting container f7a34ef2837124c4149de511b8e4b8763d42ab1cc1b34ad4e960590c9eece03f from storage" id=e2773dc5-e5b4-40f0-85ce-9ba6d287055f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:03 ha-422561 crio[520]: time="2025-10-03T18:54:03.649207319Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-422561_kube-system_2640157afe5e174d7402164688eed7be_0" id=e2773dc5-e5b4-40f0-85ce-9ba6d287055f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.621559573Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=92db6541-0ada-48f2-9f54-cf27017442d0 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.622438768Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=53e7ba88-73d6-4add-a407-22c38e727336 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.623409827Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-422561/kube-controller-manager" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.623606545Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.626737821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.627138756Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.643546137Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.644841463Z" level=info msg="createCtr: deleting container ID bb66ba1f7d85ec39c3f89147d5fb3033ad189b33e5d9ed90c51d047b702b44da from idIndex" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.644890671Z" level=info msg="createCtr: removing container bb66ba1f7d85ec39c3f89147d5fb3033ad189b33e5d9ed90c51d047b702b44da" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.644930862Z" level=info msg="createCtr: deleting container bb66ba1f7d85ec39c3f89147d5fb3033ad189b33e5d9ed90c51d047b702b44da from storage" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.647097207Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-422561_kube-system_e643a03771f1e72f527532eff2c66a9c_0" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
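The CRI-O log above carries the actual root cause of the run: every control-plane container (kube-apiserver, kube-scheduler, kube-controller-manager) dies at create time with `cannot open sd-bus: No such file or directory`. That error is raised when the OCI runtime tries to place the container into a systemd-managed cgroup over the system D-Bus and no systemd/D-Bus is reachable, which is common on container-in-container CI hosts; a frequent mitigation in such environments is running CRI-O with `cgroup_manager = "cgroupfs"` in `/etc/crio/crio.conf`, though whether that applies to this job depends on its CRI-O configuration. A hypothetical preflight check, not part of minikube, for whether the systemd cgroup manager can work on a node:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// Hypothetical preflight: the systemd cgroup manager needs systemd running
// and a reachable system D-Bus socket; absent either, container creation
// fails with "cannot open sd-bus" as in the CRI-O log above.
func main() {
	if comm, err := os.ReadFile("/proc/1/comm"); err == nil {
		fmt.Printf("PID 1 is %q\n", strings.TrimSpace(string(comm)))
	}
	if _, err := os.Stat("/run/dbus/system_bus_socket"); err != nil {
		fmt.Println("no system D-Bus socket; systemd cgroup manager unavailable:", err)
		return
	}
	fmt.Println("system D-Bus socket present")
}
```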
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:54:09.220085    2019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:54:09.220651    2019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:54:09.222273    2019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:54:09.222733    2019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:54:09.224247    2019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
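The failed `describe nodes` fails for the same reason as everything before it: nothing is listening on port 8443, on either loopback or 192.168.49.2, because the apiserver container was never created (see the empty container list above). A standalone sketch, not part of the report's tooling, to confirm the failure is at the TCP layer rather than in kubectl:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe both endpoints the log shows failing with "connection refused".
	for _, addr := range []string{"127.0.0.1:8443", "192.168.49.2:8443"} {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err) // e.g. connect: connection refused
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", addr)
	}
}
```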
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:54:09 up  1:36,  0 user,  load average: 0.02, 0.04, 0.07
	Linux ha-422561 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:54:01 ha-422561 kubelet[673]: E1003 18:54:01.650271     673 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:54:01 ha-422561 kubelet[673]:         container kube-apiserver start failed in pod kube-apiserver-ha-422561_kube-system(6ecf19dd95945fcfeaff027fad95c1ee): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:54:01 ha-422561 kubelet[673]:  > logger="UnhandledError"
	Oct 03 18:54:01 ha-422561 kubelet[673]: E1003 18:54:01.650301     673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-422561" podUID="6ecf19dd95945fcfeaff027fad95c1ee"
	Oct 03 18:54:03 ha-422561 kubelet[673]: E1003 18:54:03.259792     673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-422561?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 03 18:54:03 ha-422561 kubelet[673]: I1003 18:54:03.422743     673 kubelet_node_status.go:75] "Attempting to register node" node="ha-422561"
	Oct 03 18:54:03 ha-422561 kubelet[673]: E1003 18:54:03.423182     673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-422561"
	Oct 03 18:54:03 ha-422561 kubelet[673]: E1003 18:54:03.620953     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:54:03 ha-422561 kubelet[673]: E1003 18:54:03.649492     673 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:54:03 ha-422561 kubelet[673]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:54:03 ha-422561 kubelet[673]:  > podSandboxID="459bcbd68d65b856d4015321db829b9736981ab572101be919e168b1b3785bdb"
	Oct 03 18:54:03 ha-422561 kubelet[673]: E1003 18:54:03.649581     673 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:54:03 ha-422561 kubelet[673]:         container kube-scheduler start failed in pod kube-scheduler-ha-422561_kube-system(2640157afe5e174d7402164688eed7be): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:54:03 ha-422561 kubelet[673]:  > logger="UnhandledError"
	Oct 03 18:54:03 ha-422561 kubelet[673]: E1003 18:54:03.649608     673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-422561" podUID="2640157afe5e174d7402164688eed7be"
	Oct 03 18:54:03 ha-422561 kubelet[673]: E1003 18:54:03.705698     673 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-422561.186b0fa6982c434d  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-422561,UID:ha-422561,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-422561 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-422561,},FirstTimestamp:2025-10-03 18:48:07.610336077 +0000 UTC m=+0.070153337,LastTimestamp:2025-10-03 18:48:07.610336077 +0000 UTC m=+0.070153337,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-422561,}"
	Oct 03 18:54:05 ha-422561 kubelet[673]: E1003 18:54:05.621162     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:54:05 ha-422561 kubelet[673]: E1003 18:54:05.647355     673 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:54:05 ha-422561 kubelet[673]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:54:05 ha-422561 kubelet[673]:  > podSandboxID="2b327f08e5f0ad594cbcc01662a574beafe6a0fa01e2f506c269716f808713e3"
	Oct 03 18:54:05 ha-422561 kubelet[673]: E1003 18:54:05.647439     673 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:54:05 ha-422561 kubelet[673]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-422561_kube-system(e643a03771f1e72f527532eff2c66a9c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:54:05 ha-422561 kubelet[673]:  > logger="UnhandledError"
	Oct 03 18:54:05 ha-422561 kubelet[673]: E1003 18:54:05.647466     673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-422561" podUID="e643a03771f1e72f527532eff2c66a9c"
	Oct 03 18:54:07 ha-422561 kubelet[673]: E1003 18:54:07.636709     673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-422561\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561: exit status 2 (297.790461ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-422561" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (368.30s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-422561" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-422561\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-422561\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-422561\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-422561
helpers_test.go:243: (dbg) docker inspect ha-422561:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	        "Created": "2025-10-03T18:31:00.396132938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 83894,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T18:48:01.584921869Z",
	            "FinishedAt": "2025-10-03T18:48:00.240128679Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hostname",
	        "HostsPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hosts",
	        "LogPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512-json.log",
	        "Name": "/ha-422561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-422561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-422561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	                "LowerDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-422561",
	                "Source": "/var/lib/docker/volumes/ha-422561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422561",
	                "name.minikube.sigs.k8s.io": "ha-422561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b7bc183f57948a25d46552eb6c438fe564ed77e2518bcbeb88c2428dc903e44c",
	            "SandboxKey": "/var/run/docker/netns/b7bc183f5794",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:4a:c7:54:b6:6a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de6aa7ca29f453c0d15cb280abde7ee215f554c89e78e3db8a0f7590468114b5",
	                    "EndpointID": "3c59a4bfdbcc71d01f483fb97819fde7e13586cafec98410913d5f8c234327ac",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422561",
	                        "eef8fc426b2b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561: exit status 2 (293.726267ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node add --alsologtostderr -v 5                                                    │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node stop m02 --alsologtostderr -v 5                                               │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node start m02 --alsologtostderr -v 5                                              │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │                     │
	│ node    │ ha-422561 node list --alsologtostderr -v 5                                                   │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │                     │
	│ stop    │ ha-422561 stop --alsologtostderr -v 5                                                        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │ 03 Oct 25 18:41 UTC │
	│ start   │ ha-422561 start --wait true --alsologtostderr -v 5                                           │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │                     │
	│ node    │ ha-422561 node list --alsologtostderr -v 5                                                   │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:47 UTC │                     │
	│ node    │ ha-422561 node delete m03 --alsologtostderr -v 5                                             │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:47 UTC │                     │
	│ stop    │ ha-422561 stop --alsologtostderr -v 5                                                        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:47 UTC │ 03 Oct 25 18:48 UTC │
	│ start   │ ha-422561 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:48 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:48:01
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:48:01.358006   83697 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:48:01.358289   83697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:48:01.358300   83697 out.go:374] Setting ErrFile to fd 2...
	I1003 18:48:01.358305   83697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:48:01.358536   83697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:48:01.358996   83697 out.go:368] Setting JSON to false
	I1003 18:48:01.359863   83697 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5432,"bootTime":1759511849,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:48:01.359957   83697 start.go:140] virtualization: kvm guest
	I1003 18:48:01.362210   83697 out.go:179] * [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:48:01.363666   83697 notify.go:220] Checking for updates...
	I1003 18:48:01.363675   83697 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:48:01.365090   83697 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:48:01.366363   83697 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:48:01.367623   83697 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:48:01.368893   83697 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:48:01.370300   83697 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:48:01.372005   83697 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:48:01.372415   83697 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:48:01.396617   83697 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:48:01.396706   83697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:48:01.448802   83697 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:48:01.439437332 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:48:01.448910   83697 docker.go:318] overlay module found
	I1003 18:48:01.450884   83697 out.go:179] * Using the docker driver based on existing profile
	I1003 18:48:01.452231   83697 start.go:304] selected driver: docker
	I1003 18:48:01.452246   83697 start.go:924] validating driver "docker" against &{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:48:01.452322   83697 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:48:01.452405   83697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:48:01.509159   83697 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:48:01.498948046 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:48:01.509757   83697 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:48:01.509786   83697 cni.go:84] Creating CNI manager for ""
	I1003 18:48:01.509833   83697 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:48:01.509876   83697 start.go:348] cluster config:
	{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:48:01.511871   83697 out.go:179] * Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	I1003 18:48:01.513298   83697 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:48:01.514481   83697 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:48:01.515584   83697 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:48:01.515621   83697 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:48:01.515631   83697 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:48:01.515642   83697 cache.go:58] Caching tarball of preloaded images
	I1003 18:48:01.515725   83697 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:48:01.515744   83697 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:48:01.515874   83697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:48:01.536348   83697 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:48:01.536367   83697 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:48:01.536383   83697 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:48:01.536411   83697 start.go:360] acquireMachinesLock for ha-422561: {Name:mk32fd04a5d9b5f89831583bab7d7527f4d187a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:48:01.536466   83697 start.go:364] duration metric: took 37.424µs to acquireMachinesLock for "ha-422561"
	I1003 18:48:01.536482   83697 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:48:01.536489   83697 fix.go:54] fixHost starting: 
	I1003 18:48:01.536680   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:01.553807   83697 fix.go:112] recreateIfNeeded on ha-422561: state=Stopped err=<nil>
	W1003 18:48:01.553839   83697 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 18:48:01.555613   83697 out.go:252] * Restarting existing docker container for "ha-422561" ...
	I1003 18:48:01.555684   83697 cli_runner.go:164] Run: docker start ha-422561
	I1003 18:48:01.796448   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:01.815210   83697 kic.go:430] container "ha-422561" state is running.
	I1003 18:48:01.815590   83697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:48:01.834439   83697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:48:01.834700   83697 machine.go:93] provisionDockerMachine start ...
	I1003 18:48:01.834770   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:01.852545   83697 main.go:141] libmachine: Using SSH client type: native
	I1003 18:48:01.852799   83697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1003 18:48:01.852812   83697 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:48:01.853394   83697 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49686->127.0.0.1:32793: read: connection reset by peer
	I1003 18:48:04.996743   83697 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:48:04.996769   83697 ubuntu.go:182] provisioning hostname "ha-422561"
	I1003 18:48:04.996830   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:05.013852   83697 main.go:141] libmachine: Using SSH client type: native
	I1003 18:48:05.014117   83697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1003 18:48:05.014132   83697 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-422561 && echo "ha-422561" | sudo tee /etc/hostname
	I1003 18:48:05.165019   83697 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:48:05.165102   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:05.183718   83697 main.go:141] libmachine: Using SSH client type: native
	I1003 18:48:05.183927   83697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1003 18:48:05.183944   83697 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422561/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:48:05.326262   83697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:48:05.326300   83697 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:48:05.326346   83697 ubuntu.go:190] setting up certificates
	I1003 18:48:05.326359   83697 provision.go:84] configureAuth start
	I1003 18:48:05.326433   83697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:48:05.343930   83697 provision.go:143] copyHostCerts
	I1003 18:48:05.343993   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:48:05.344029   83697 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:48:05.344046   83697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:48:05.344123   83697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:48:05.344224   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:48:05.344246   83697 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:48:05.344254   83697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:48:05.344285   83697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:48:05.344349   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:48:05.344369   83697 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:48:05.344376   83697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:48:05.344403   83697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:48:05.344471   83697 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.ha-422561 san=[127.0.0.1 192.168.49.2 ha-422561 localhost minikube]
	I1003 18:48:05.548175   83697 provision.go:177] copyRemoteCerts
	I1003 18:48:05.548237   83697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:48:05.548272   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:05.565560   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:05.665910   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:48:05.665989   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:48:05.683091   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:48:05.683139   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1003 18:48:05.699514   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:48:05.699586   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 18:48:05.716017   83697 provision.go:87] duration metric: took 389.640217ms to configureAuth
	I1003 18:48:05.716044   83697 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:48:05.716221   83697 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:48:05.716337   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:05.735187   83697 main.go:141] libmachine: Using SSH client type: native
	I1003 18:48:05.735436   83697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1003 18:48:05.735459   83697 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:48:05.988283   83697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:48:05.988310   83697 machine.go:96] duration metric: took 4.153593591s to provisionDockerMachine
	I1003 18:48:05.988321   83697 start.go:293] postStartSetup for "ha-422561" (driver="docker")
	I1003 18:48:05.988333   83697 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:48:05.988396   83697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:48:05.988435   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:06.005743   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:06.106231   83697 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:48:06.109622   83697 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:48:06.109647   83697 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:48:06.109656   83697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:48:06.109722   83697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:48:06.109816   83697 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:48:06.109829   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:48:06.109949   83697 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:48:06.117171   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:48:06.133466   83697 start.go:296] duration metric: took 145.133244ms for postStartSetup
	I1003 18:48:06.133546   83697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:48:06.133640   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:06.151048   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:06.247794   83697 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:48:06.252196   83697 fix.go:56] duration metric: took 4.715699614s for fixHost
	I1003 18:48:06.252229   83697 start.go:83] releasing machines lock for "ha-422561", held for 4.715747117s
	I1003 18:48:06.252292   83697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:48:06.269719   83697 ssh_runner.go:195] Run: cat /version.json
	I1003 18:48:06.269776   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:06.269848   83697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:48:06.269925   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:06.287309   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:06.288536   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:06.440444   83697 ssh_runner.go:195] Run: systemctl --version
	I1003 18:48:06.446644   83697 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:48:06.480099   83697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:48:06.484552   83697 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:48:06.484620   83697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:48:06.492151   83697 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 18:48:06.492174   83697 start.go:495] detecting cgroup driver to use...
	I1003 18:48:06.492207   83697 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:48:06.492242   83697 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:48:06.505874   83697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:48:06.518096   83697 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:48:06.518153   83697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:48:06.532038   83697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:48:06.543572   83697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:48:06.619047   83697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:48:06.695631   83697 docker.go:234] disabling docker service ...
	I1003 18:48:06.695709   83697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:48:06.709304   83697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:48:06.720766   83697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:48:06.794255   83697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:48:06.872577   83697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:48:06.884756   83697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:48:06.898431   83697 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:48:06.898497   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.907185   83697 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:48:06.907288   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.915650   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.923921   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.932255   83697 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:48:06.939698   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.948130   83697 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.955875   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
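The sed/grep edits above amount to a small CRI-O drop-in: set the pause image, switch the cgroup manager to systemd, pin conmon's cgroup, and open unprivileged low ports. A sketch of what /etc/crio/crio.conf.d/02-crio.conf plausibly contains after this pass (reconstructed from the commands only; the [crio.image]/[crio.runtime] section headers are assumed from CRI-O's standard config layout, not captured from the node):

	# sketch only; section placement assumed from CRI-O defaults
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]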
	I1003 18:48:06.963958   83697 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:48:06.970620   83697 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:48:06.977236   83697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:48:07.055447   83697 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 18:48:07.158344   83697 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:48:07.158401   83697 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:48:07.162236   83697 start.go:563] Will wait 60s for crictl version
	I1003 18:48:07.162283   83697 ssh_runner.go:195] Run: which crictl
	I1003 18:48:07.165713   83697 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:48:07.189610   83697 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
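In the version block above, "Version: 0.1.0" is the CRI API version the runtime reports, while RuntimeVersion 1.34.1 is CRI-O itself. The same query can be issued with the endpoint spelled out explicitly instead of relying on the /etc/crictl.yaml written a few lines earlier (illustrative form, not part of the test run):

	sudo /usr/local/bin/crictl --runtime-endpoint unix:///var/run/crio/crio.sock version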
	I1003 18:48:07.189696   83697 ssh_runner.go:195] Run: crio --version
	I1003 18:48:07.216037   83697 ssh_runner.go:195] Run: crio --version
	I1003 18:48:07.243602   83697 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:48:07.244835   83697 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:48:07.261059   83697 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:48:07.264966   83697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:48:07.274777   83697 kubeadm.go:883] updating cluster {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:48:07.274871   83697 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:48:07.275110   83697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:48:07.306722   83697 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:48:07.306745   83697 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:48:07.306802   83697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:48:07.331000   83697 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:48:07.331023   83697 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:48:07.331031   83697 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 18:48:07.331136   83697 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
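The [Unit]/[Service] fragment above is the kubelet drop-in that minikube renders from this node config; it is the 359-byte 10-kubeadm.conf scp'd into /etc/systemd/system/kubelet.service.d/ a few lines below. One way to inspect the rendered unit on a running profile (illustrative command, not taken from the log):

	minikube ssh -p ha-422561 -- systemctl cat kubelet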
	I1003 18:48:07.331212   83697 ssh_runner.go:195] Run: crio config
	I1003 18:48:07.375866   83697 cni.go:84] Creating CNI manager for ""
	I1003 18:48:07.375888   83697 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:48:07.375910   83697 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:48:07.375937   83697 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422561 NodeName:ha-422561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:48:07.376106   83697 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422561"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 18:48:07.376177   83697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:48:07.383986   83697 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:48:07.384055   83697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:48:07.391187   83697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1003 18:48:07.403399   83697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:48:07.414754   83697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
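The 2205-byte kubeadm.yaml.new written here is the freshly rendered config; at 18:48:08 below, minikube diffs it against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguring. The equivalent manual check inside the node is the same command the log runs:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new

No output means the two configs match, which is why the restart path below concludes the running cluster does not require reconfiguration.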
	I1003 18:48:07.426847   83697 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 18:48:07.430235   83697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:48:07.439401   83697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:48:07.516381   83697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:48:07.538237   83697 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561 for IP: 192.168.49.2
	I1003 18:48:07.538255   83697 certs.go:195] generating shared ca certs ...
	I1003 18:48:07.538271   83697 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:48:07.538437   83697 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:48:07.538512   83697 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:48:07.538528   83697 certs.go:257] generating profile certs ...
	I1003 18:48:07.538625   83697 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key
	I1003 18:48:07.538704   83697 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2ce2e456
	I1003 18:48:07.538754   83697 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key
	I1003 18:48:07.538768   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:48:07.538784   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:48:07.538800   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:48:07.538816   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:48:07.538835   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:48:07.538852   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:48:07.538868   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:48:07.538885   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:48:07.539018   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:48:07.539063   83697 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:48:07.539074   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:48:07.539115   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:48:07.539150   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:48:07.539179   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:48:07.539234   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:48:07.539276   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:48:07.539296   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:48:07.539321   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:48:07.540071   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:48:07.557965   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:48:07.575458   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:48:07.593468   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:48:07.615468   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1003 18:48:07.632748   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:48:07.648762   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:48:07.664587   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:48:07.680650   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:48:07.696584   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:48:07.712414   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:48:07.729163   83697 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:48:07.740601   83697 ssh_runner.go:195] Run: openssl version
	I1003 18:48:07.746326   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:48:07.754771   83697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:48:07.758126   83697 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:48:07.758166   83697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:48:07.791672   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
	I1003 18:48:07.799482   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:48:07.807556   83697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:48:07.811134   83697 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:48:07.811185   83697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:48:07.844703   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:48:07.852290   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:48:07.859877   83697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:48:07.863389   83697 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:48:07.863436   83697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:48:07.897292   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
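The test -L / ln -fs pairs above reproduce OpenSSL's hashed-directory lookup: each CA under /etc/ssl/certs must be reachable as <subject-hash>.0, where the hash is what openssl x509 -hash prints. For the minikube CA that is b5213941, matching the symlink just created (illustrative):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, the basename of /etc/ssl/certs/b5213941.0 above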
	I1003 18:48:07.905487   83697 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:48:07.909431   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 18:48:07.943717   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 18:48:07.977826   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 18:48:08.011227   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 18:48:08.050549   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 18:48:08.092515   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
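Each openssl x509 -checkend 86400 run above asks whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 means it stays valid past that window, non-zero would trigger regeneration. Standalone form of the same check (illustrative):

	openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	  && echo "valid for >24h" || echo "expires within 24h"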
	I1003 18:48:08.127614   83697 kubeadm.go:400] StartCluster: {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:48:08.127701   83697 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:48:08.127742   83697 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:48:08.154681   83697 cri.go:89] found id: ""
	I1003 18:48:08.154738   83697 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:48:08.162929   83697 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 18:48:08.162947   83697 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 18:48:08.163014   83697 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 18:48:08.169965   83697 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:48:08.170348   83697 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:48:08.170445   83697 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-8669/kubeconfig needs updating (will repair): [kubeconfig missing "ha-422561" cluster setting kubeconfig missing "ha-422561" context setting]
	I1003 18:48:08.170662   83697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:48:08.171209   83697 kapi.go:59] client config for ha-422561: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:48:08.171603   83697 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1003 18:48:08.171622   83697 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1003 18:48:08.171626   83697 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 18:48:08.171630   83697 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1003 18:48:08.171635   83697 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 18:48:08.171700   83697 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1003 18:48:08.172024   83697 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 18:48:08.179145   83697 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1003 18:48:08.179168   83697 kubeadm.go:601] duration metric: took 16.215128ms to restartPrimaryControlPlane
	I1003 18:48:08.179177   83697 kubeadm.go:402] duration metric: took 51.569431ms to StartCluster
	I1003 18:48:08.179192   83697 settings.go:142] acquiring lock: {Name:mk6bc950503a8f341b8aacc07a8bc72d5db3a25c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:48:08.179256   83697 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:48:08.179754   83697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:48:08.179960   83697 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:48:08.180005   83697 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 18:48:08.180077   83697 addons.go:69] Setting storage-provisioner=true in profile "ha-422561"
	I1003 18:48:08.180096   83697 addons.go:238] Setting addon storage-provisioner=true in "ha-422561"
	I1003 18:48:08.180126   83697 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:48:08.180143   83697 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:48:08.180118   83697 addons.go:69] Setting default-storageclass=true in profile "ha-422561"
	I1003 18:48:08.180191   83697 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-422561"
	I1003 18:48:08.180383   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:08.180572   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:08.183165   83697 out.go:179] * Verifying Kubernetes components...
	I1003 18:48:08.184503   83697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:48:08.199461   83697 kapi.go:59] client config for ha-422561: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:48:08.199832   83697 addons.go:238] Setting addon default-storageclass=true in "ha-422561"
	I1003 18:48:08.199880   83697 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:48:08.200383   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:08.200811   83697 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:48:08.202643   83697 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:48:08.202664   83697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 18:48:08.202713   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:08.226707   83697 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 18:48:08.226733   83697 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 18:48:08.226796   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:08.227638   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:08.244287   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:08.283745   83697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:48:08.296260   83697 node_ready.go:35] waiting up to 6m0s for node "ha-422561" to be "Ready" ...
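node_ready.go polls the node object until its Ready condition turns true, for up to six minutes; the connection-refused warnings below are that poll failing while the apiserver restarts. Once the apiserver answers, the same wait could be expressed with kubectl (hypothetical equivalent, using the ha-422561 context that the kubeconfig repair above restored):

	kubectl --context ha-422561 wait --for=condition=Ready node/ha-422561 --timeout=6m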
	I1003 18:48:08.335656   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:48:08.351120   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:08.389710   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:08.389751   83697 retry.go:31] will retry after 328.107449ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:08.404951   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:08.404995   83697 retry.go:31] will retry after 321.741218ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
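Both addon applies fail for the same reason: kubectl's client-side validation tries to download the OpenAPI schema from the apiserver on localhost:8443, which is still coming back up after the crio restart, so every attempt ends in connection refused and retry.go backs off with jittered delays. A manual equivalent would gate the apply on apiserver readiness (hypothetical wait loop, not taken from the log):

	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.1/kubectl get --raw=/readyz >/dev/null 2>&1; do
	  sleep 2
	done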
	I1003 18:48:08.718445   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:48:08.726854   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:08.773648   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:08.773686   83697 retry.go:31] will retry after 472.06094ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:08.777934   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:08.777965   83697 retry.go:31] will retry after 427.725934ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.205852   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:48:09.246423   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:09.258516   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.258554   83697 retry.go:31] will retry after 827.773787ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:09.299212   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.299244   83697 retry.go:31] will retry after 477.48466ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.776942   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:09.826781   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.826812   83697 retry.go:31] will retry after 1.085146889s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:10.087227   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:10.137943   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:10.137973   83697 retry.go:31] will retry after 739.377919ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:10.297625   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:10.877756   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:48:10.912311   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:10.929140   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:10.929175   83697 retry.go:31] will retry after 1.497643033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:10.963566   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:10.963603   83697 retry.go:31] will retry after 713.576365ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:11.678080   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:11.729368   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:11.729399   83697 retry.go:31] will retry after 2.048730039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:12.427099   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:12.477658   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:12.477701   83697 retry.go:31] will retry after 2.498808401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:12.797484   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:13.779038   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:13.830173   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:13.830204   83697 retry.go:31] will retry after 4.102789416s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:14.977444   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:15.028118   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:15.028144   83697 retry.go:31] will retry after 2.619354281s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:15.296814   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:17.296893   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:17.648338   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:17.699440   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:17.699475   83697 retry.go:31] will retry after 4.509399124s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:17.933252   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:17.983755   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:17.983783   83697 retry.go:31] will retry after 5.633518758s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:19.297715   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:21.797697   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:22.209174   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:22.259804   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:22.259835   83697 retry.go:31] will retry after 5.445935062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:23.618051   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:23.669865   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:23.669892   83697 retry.go:31] will retry after 8.812204221s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:24.297645   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:26.796887   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:27.706519   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:27.757124   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:27.757152   83697 retry.go:31] will retry after 10.217471518s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:29.296865   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:31.797282   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:32.482714   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:32.535080   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:32.535111   83697 retry.go:31] will retry after 6.964681944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:34.297049   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:36.297155   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:37.974824   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:38.025602   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:38.025636   83697 retry.go:31] will retry after 18.172547929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:38.297586   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:39.499928   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:39.551482   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:39.551509   83697 retry.go:31] will retry after 10.529315365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:40.297633   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:42.796931   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:44.797268   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:46.797590   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:49.296867   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:50.081207   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:50.133196   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:50.133222   83697 retry.go:31] will retry after 12.42585121s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:51.296943   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:53.297831   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:55.796917   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:56.198392   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:56.249657   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:56.249700   83697 retry.go:31] will retry after 29.529741997s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:57.797326   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:00.297226   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:02.297421   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:49:02.559843   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:49:02.612999   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:49:02.613029   83697 retry.go:31] will retry after 27.551629332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:49:04.797075   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:06.797507   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:09.297080   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:11.297269   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:13.796944   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:15.797079   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:17.797368   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:19.797700   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:21.797785   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:24.296940   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:49:25.779700   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:49:25.831805   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:49:25.831933   83697 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
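
A note on the failure mode above: the kubectl hint about --validate=false is a red herring here. Validation fails only because kubectl cannot download the OpenAPI schema from the apiserver at localhost:8443, and the same "connection refused" would sink the apply itself even with validation off; the apiserver is simply not listening. A minimal Go sketch of a probe that separates "manifest invalid" from "apiserver down" (addresses copied from the log; this is not minikube code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Addresses taken from the log: kubectl talks to localhost:8443,
	// the node-readiness poller to 192.168.49.2:8443.
	for _, addr := range []string{"localhost:8443", "192.168.49.2:8443"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: unreachable (%v); --validate=false cannot help here\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: accepting connections\n", addr)
	}
}
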
	W1003 18:49:26.796936   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:28.797330   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:49:30.164992   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:49:30.215742   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:49:30.215772   83697 retry.go:31] will retry after 28.778272146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:49:30.797426   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:33.296941   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:35.297159   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:37.297417   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:39.297817   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:41.796863   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:44.296913   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:46.796856   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:48.797475   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:50.797629   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:53.296889   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:55.796908   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:57.797151   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:49:58.994596   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:49:59.046263   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:49:59.046378   83697 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1003 18:49:59.048398   83697 out.go:179] * Enabled addons: 
	I1003 18:49:59.049773   83697 addons.go:514] duration metric: took 1m50.869773501s for enable addons: enabled=[]
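
The addons phase thus ends with enabled=[] after about 1m51s of retries. The retry.go:31 lines show growing, roughly jittered delays (2.6s, 4.5s, 5.6s, ..., 29.5s). A stdlib-only sketch of that pattern, assuming a hypothetical applyWithRetry helper; minikube's actual retry helper differs in detail:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry is a hypothetical stand-in for the helper behind the
// retry.go:31 lines above: run `kubectl apply`, and on failure sleep for a
// growing, jittered interval until the overall deadline passes.
func applyWithRetry(manifest string, deadline time.Duration) error {
	start := time.Now()
	backoff := 2 * time.Second
	for {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
		}
		// Jitter keeps the concurrent appliers (storageclass and
		// storage-provisioner above) from retrying in lockstep; doubling
		// mirrors the growing intervals seen in the log.
		delay := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		backoff *= 2
	}
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
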
	W1003 18:50:00.296924   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	[... the identical node_ready.go:55 "connection refused" warning repeats 108 more times, every 2-2.5s, from 18:50:02 through 18:54:04 ...]
	W1003 18:54:07.296848   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:54:08.296599   83697 node_ready.go:38] duration metric: took 6m0.000289942s for node "ha-422561" to be "Ready" ...
	I1003 18:54:08.298641   83697 out.go:203] 
	W1003 18:54:08.300195   83697 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1003 18:54:08.300213   83697 out.go:285] * 
	W1003 18:54:08.301827   83697 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:54:08.303083   83697 out.go:203] 
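
The final failure is purely a timeout: the node_ready.go poller issues GET /api/v1/nodes/ha-422561 every 2-2.5s and gives up when its 6-minute context expires, which is exactly the GUEST_START "WaitNodeCondition: context deadline exceeded" message above. A stdlib-only sketch of such a wait loop (URL and node name copied from the log; the real poller authenticates with client certificates instead of skipping TLS verification):

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// 6-minute budget, matching "wait 6m0s for node" above.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	// InsecureSkipVerify only because this is a throwaway diagnostic.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	url := "https://192.168.49.2:8443/api/v1/nodes/ha-422561"
	for {
		req, _ := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		resp, err := client.Do(req)
		if err == nil {
			resp.Body.Close()
			fmt.Println("apiserver answered:", resp.Status)
			return
		}
		fmt.Println("will retry:", err)
		select {
		case <-ctx.Done():
			fmt.Println("WaitNodeCondition:", ctx.Err()) // context deadline exceeded
			return
		case <-time.After(2 * time.Second):
		}
	}
}
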
	
	
	==> CRI-O <==
	Oct 03 18:54:03 ha-422561 crio[520]: time="2025-10-03T18:54:03.647082514Z" level=info msg="createCtr: removing container f7a34ef2837124c4149de511b8e4b8763d42ab1cc1b34ad4e960590c9eece03f" id=e2773dc5-e5b4-40f0-85ce-9ba6d287055f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:03 ha-422561 crio[520]: time="2025-10-03T18:54:03.647112111Z" level=info msg="createCtr: deleting container f7a34ef2837124c4149de511b8e4b8763d42ab1cc1b34ad4e960590c9eece03f from storage" id=e2773dc5-e5b4-40f0-85ce-9ba6d287055f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:03 ha-422561 crio[520]: time="2025-10-03T18:54:03.649207319Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-422561_kube-system_2640157afe5e174d7402164688eed7be_0" id=e2773dc5-e5b4-40f0-85ce-9ba6d287055f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.621559573Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=92db6541-0ada-48f2-9f54-cf27017442d0 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.622438768Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=53e7ba88-73d6-4add-a407-22c38e727336 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.623409827Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-422561/kube-controller-manager" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.623606545Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.626737821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.627138756Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.643546137Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.644841463Z" level=info msg="createCtr: deleting container ID bb66ba1f7d85ec39c3f89147d5fb3033ad189b33e5d9ed90c51d047b702b44da from idIndex" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.644890671Z" level=info msg="createCtr: removing container bb66ba1f7d85ec39c3f89147d5fb3033ad189b33e5d9ed90c51d047b702b44da" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.644930862Z" level=info msg="createCtr: deleting container bb66ba1f7d85ec39c3f89147d5fb3033ad189b33e5d9ed90c51d047b702b44da from storage" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.647097207Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-422561_kube-system_e643a03771f1e72f527532eff2c66a9c_0" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.621179385Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=4459c3b4-b2a1-4a7d-a3e0-ae61b105513d name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.622026747Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=7b48feb8-9d19-4049-a8c9-17077018b490 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.622792531Z" level=info msg="Creating container: kube-system/etcd-ha-422561/etcd" id=8a5ab53f-6af7-4d0d-9eea-b8fbcd2d862e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.623048974Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.626467402Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.626902399Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.640777041Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8a5ab53f-6af7-4d0d-9eea-b8fbcd2d862e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.642302866Z" level=info msg="createCtr: deleting container ID 72c008a26077bb623cd91e30f4b47ddb807b831e02622bc9aacd968ee2e14cbf from idIndex" id=8a5ab53f-6af7-4d0d-9eea-b8fbcd2d862e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.642334798Z" level=info msg="createCtr: removing container 72c008a26077bb623cd91e30f4b47ddb807b831e02622bc9aacd968ee2e14cbf" id=8a5ab53f-6af7-4d0d-9eea-b8fbcd2d862e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.642365568Z" level=info msg="createCtr: deleting container 72c008a26077bb623cd91e30f4b47ddb807b831e02622bc9aacd968ee2e14cbf from storage" id=8a5ab53f-6af7-4d0d-9eea-b8fbcd2d862e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.644616558Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-422561_kube-system_6803106e6cb30e1b9b282ce29772fddf_0" id=8a5ab53f-6af7-4d0d-9eea-b8fbcd2d862e name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:54:10.785512    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:54:10.786087    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:54:10.787586    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:54:10.788062    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:54:10.789580    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:54:10 up  1:36,  0 user,  load average: 0.02, 0.04, 0.07
	Linux ha-422561 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:54:03 ha-422561 kubelet[673]: E1003 18:54:03.649581     673 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:54:03 ha-422561 kubelet[673]:         container kube-scheduler start failed in pod kube-scheduler-ha-422561_kube-system(2640157afe5e174d7402164688eed7be): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:54:03 ha-422561 kubelet[673]:  > logger="UnhandledError"
	Oct 03 18:54:03 ha-422561 kubelet[673]: E1003 18:54:03.649608     673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-422561" podUID="2640157afe5e174d7402164688eed7be"
	Oct 03 18:54:03 ha-422561 kubelet[673]: E1003 18:54:03.705698     673 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-422561.186b0fa6982c434d  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-422561,UID:ha-422561,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-422561 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-422561,},FirstTimestamp:2025-10-03 18:48:07.610336077 +0000 UTC m=+0.070153337,LastTimestamp:2025-10-03 18:48:07.610336077 +0000 UTC m=+0.070153337,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-422561,}"
	Oct 03 18:54:05 ha-422561 kubelet[673]: E1003 18:54:05.621162     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:54:05 ha-422561 kubelet[673]: E1003 18:54:05.647355     673 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:54:05 ha-422561 kubelet[673]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:54:05 ha-422561 kubelet[673]:  > podSandboxID="2b327f08e5f0ad594cbcc01662a574beafe6a0fa01e2f506c269716f808713e3"
	Oct 03 18:54:05 ha-422561 kubelet[673]: E1003 18:54:05.647439     673 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:54:05 ha-422561 kubelet[673]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-422561_kube-system(e643a03771f1e72f527532eff2c66a9c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:54:05 ha-422561 kubelet[673]:  > logger="UnhandledError"
	Oct 03 18:54:05 ha-422561 kubelet[673]: E1003 18:54:05.647466     673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-422561" podUID="e643a03771f1e72f527532eff2c66a9c"
	Oct 03 18:54:07 ha-422561 kubelet[673]: E1003 18:54:07.636709     673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-422561\" not found"
	Oct 03 18:54:09 ha-422561 kubelet[673]: E1003 18:54:09.620742     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:54:09 ha-422561 kubelet[673]: E1003 18:54:09.644993     673 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:54:09 ha-422561 kubelet[673]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:54:09 ha-422561 kubelet[673]:  > podSandboxID="dab64913433ecb09fb1cb30b031bad1b6b1a6ed66d7a67cc65799603398c5952"
	Oct 03 18:54:09 ha-422561 kubelet[673]: E1003 18:54:09.645107     673 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:54:09 ha-422561 kubelet[673]:         container etcd start failed in pod etcd-ha-422561_kube-system(6803106e6cb30e1b9b282ce29772fddf): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:54:09 ha-422561 kubelet[673]:  > logger="UnhandledError"
	Oct 03 18:54:09 ha-422561 kubelet[673]: E1003 18:54:09.645137     673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-422561" podUID="6803106e6cb30e1b9b282ce29772fddf"
	Oct 03 18:54:10 ha-422561 kubelet[673]: E1003 18:54:10.261108     673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-422561?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 03 18:54:10 ha-422561 kubelet[673]: I1003 18:54:10.424386     673 kubelet_node_status.go:75] "Attempting to register node" node="ha-422561"
	Oct 03 18:54:10 ha-422561 kubelet[673]: E1003 18:54:10.424727     673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-422561"
	

                                                
                                                
-- /stdout --
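
Every container-create failure in the dump above is the same error: "cannot open sd-bus: No such file or directory" from CRI-O's CreateContainer, which usually indicates the OCI runtime was asked to use the systemd cgroup manager but found no D-Bus socket inside the kic container. A minimal sketch of how one might confirm that reading while the ha-422561 profile is still up (the socket path and the availability of `crio config` in the node image are assumptions, not something this report verifies):

	# does CRI-O in the node use the systemd cgroup manager? (assumption: crio binary present in node image)
	out/minikube-linux-amd64 ssh -p ha-422561 -- sudo crio config 2>/dev/null | grep cgroup_manager
	# is there a system bus socket for it to talk to? (assumption: conventional path /run/dbus/system_bus_socket)
	out/minikube-linux-amd64 ssh -p ha-422561 -- ls -l /run/dbus/system_bus_socket
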
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561: exit status 2 (296.985554ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-422561" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.56s)
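
The post-mortem helpers lean on minikube's Go-template support in `status`; the non-zero exit encodes degraded component state, which is why helpers_test.go tags exit status 2 as "(may be ok)" and then skips the kubectl steps. As a hedged illustration, the separate `--format={{.Host}}` and `--format={{.APIServer}}` probes used throughout this report can be combined into one query (field names taken from those probes):

	out/minikube-linux-amd64 status -p ha-422561 -n ha-422561 --format '{{.Host}} {{.APIServer}}'
	# expected here: "Running Stopped" - container up, apiserver down
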

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (1.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422561 node add --control-plane --alsologtostderr -v 5: exit status 103 (253.404932ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-422561 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-422561"

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:54:11.223155   88339 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:54:11.223423   88339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:54:11.223433   88339 out.go:374] Setting ErrFile to fd 2...
	I1003 18:54:11.223437   88339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:54:11.224121   88339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:54:11.224786   88339 mustload.go:65] Loading cluster: ha-422561
	I1003 18:54:11.225172   88339 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:54:11.225505   88339 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:54:11.242841   88339 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:54:11.243108   88339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:54:11.296381   88339 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:54:11.285275894 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:54:11.296576   88339 api_server.go:166] Checking apiserver status ...
	I1003 18:54:11.296654   88339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:54:11.296717   88339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:54:11.313825   88339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	W1003 18:54:11.416580   88339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:54:11.418449   88339 out.go:179] * The control-plane node ha-422561 apiserver is not running: (state=Stopped)
	I1003 18:54:11.419571   88339 out.go:179]   To start a cluster, run: "minikube start -p ha-422561"

                                                
                                                
** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-422561 node add --control-plane --alsologtostderr -v 5" : exit status 103
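
The stderr trace above shows exactly how `node add` reaches its verdict: mustload inspects the container, then api_server.go SSHes into the control-plane node and looks for a kube-apiserver process; the empty pgrep result is what produces "state=Stopped" and exit code 103. That check can be replayed by hand with the same pattern minikube logged:

	out/minikube-linux-amd64 ssh -p ha-422561 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# no output / exit 1 here matches the "stopped: unable to get apiserver pid" line above
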
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-422561
helpers_test.go:243: (dbg) docker inspect ha-422561:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	        "Created": "2025-10-03T18:31:00.396132938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 83894,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T18:48:01.584921869Z",
	            "FinishedAt": "2025-10-03T18:48:00.240128679Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hostname",
	        "HostsPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hosts",
	        "LogPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512-json.log",
	        "Name": "/ha-422561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-422561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-422561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	                "LowerDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-422561",
	                "Source": "/var/lib/docker/volumes/ha-422561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422561",
	                "name.minikube.sigs.k8s.io": "ha-422561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b7bc183f57948a25d46552eb6c438fe564ed77e2518bcbeb88c2428dc903e44c",
	            "SandboxKey": "/var/run/docker/netns/b7bc183f5794",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:4a:c7:54:b6:6a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de6aa7ca29f453c0d15cb280abde7ee215f554c89e78e3db8a0f7590468114b5",
	                    "EndpointID": "3c59a4bfdbcc71d01f483fb97819fde7e13586cafec98410913d5f8c234327ac",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422561",
	                        "eef8fc426b2b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
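
The port bindings in this inspect output are the same data minikube reads with a Go template (see the "22/tcp" inspect command in the stderr above). Pointing the identical template at 8443/tcp recovers the host port the (currently stopped) apiserver was published on:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-422561
	# per the Ports map above, this prints 32796
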
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561: exit status 2 (293.632195ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node add --alsologtostderr -v 5                                                    │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node stop m02 --alsologtostderr -v 5                                               │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node start m02 --alsologtostderr -v 5                                              │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │                     │
	│ node    │ ha-422561 node list --alsologtostderr -v 5                                                   │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │                     │
	│ stop    │ ha-422561 stop --alsologtostderr -v 5                                                        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │ 03 Oct 25 18:41 UTC │
	│ start   │ ha-422561 start --wait true --alsologtostderr -v 5                                           │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │                     │
	│ node    │ ha-422561 node list --alsologtostderr -v 5                                                   │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:47 UTC │                     │
	│ node    │ ha-422561 node delete m03 --alsologtostderr -v 5                                             │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:47 UTC │                     │
	│ stop    │ ha-422561 stop --alsologtostderr -v 5                                                        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:47 UTC │ 03 Oct 25 18:48 UTC │
	│ start   │ ha-422561 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:48 UTC │                     │
	│ node    │ ha-422561 node add --control-plane --alsologtostderr -v 5                                    │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:54 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:48:01
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:48:01.358006   83697 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:48:01.358289   83697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:48:01.358300   83697 out.go:374] Setting ErrFile to fd 2...
	I1003 18:48:01.358305   83697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:48:01.358536   83697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:48:01.358996   83697 out.go:368] Setting JSON to false
	I1003 18:48:01.359863   83697 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5432,"bootTime":1759511849,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:48:01.359957   83697 start.go:140] virtualization: kvm guest
	I1003 18:48:01.362210   83697 out.go:179] * [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:48:01.363666   83697 notify.go:220] Checking for updates...
	I1003 18:48:01.363675   83697 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:48:01.365090   83697 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:48:01.366363   83697 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:48:01.367623   83697 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:48:01.368893   83697 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:48:01.370300   83697 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:48:01.372005   83697 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:48:01.372415   83697 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:48:01.396617   83697 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:48:01.396706   83697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:48:01.448802   83697 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:48:01.439437332 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:48:01.448910   83697 docker.go:318] overlay module found
	I1003 18:48:01.450884   83697 out.go:179] * Using the docker driver based on existing profile
	I1003 18:48:01.452231   83697 start.go:304] selected driver: docker
	I1003 18:48:01.452246   83697 start.go:924] validating driver "docker" against &{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:48:01.452322   83697 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:48:01.452405   83697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:48:01.509159   83697 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:48:01.498948046 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:48:01.509757   83697 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:48:01.509786   83697 cni.go:84] Creating CNI manager for ""
	I1003 18:48:01.509833   83697 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:48:01.509876   83697 start.go:348] cluster config:
	{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:48:01.511871   83697 out.go:179] * Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	I1003 18:48:01.513298   83697 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:48:01.514481   83697 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:48:01.515584   83697 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:48:01.515621   83697 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:48:01.515631   83697 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:48:01.515642   83697 cache.go:58] Caching tarball of preloaded images
	I1003 18:48:01.515725   83697 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:48:01.515744   83697 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:48:01.515874   83697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:48:01.536348   83697 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:48:01.536367   83697 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:48:01.536383   83697 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:48:01.536411   83697 start.go:360] acquireMachinesLock for ha-422561: {Name:mk32fd04a5d9b5f89831583bab7d7527f4d187a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:48:01.536466   83697 start.go:364] duration metric: took 37.424µs to acquireMachinesLock for "ha-422561"
	I1003 18:48:01.536482   83697 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:48:01.536489   83697 fix.go:54] fixHost starting: 
	I1003 18:48:01.536680   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:01.553807   83697 fix.go:112] recreateIfNeeded on ha-422561: state=Stopped err=<nil>
	W1003 18:48:01.553839   83697 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 18:48:01.555613   83697 out.go:252] * Restarting existing docker container for "ha-422561" ...
	I1003 18:48:01.555684   83697 cli_runner.go:164] Run: docker start ha-422561
	I1003 18:48:01.796448   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:01.815210   83697 kic.go:430] container "ha-422561" state is running.
	I1003 18:48:01.815590   83697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:48:01.834439   83697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:48:01.834700   83697 machine.go:93] provisionDockerMachine start ...
	I1003 18:48:01.834770   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:01.852545   83697 main.go:141] libmachine: Using SSH client type: native
	I1003 18:48:01.852799   83697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1003 18:48:01.852812   83697 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:48:01.853394   83697 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49686->127.0.0.1:32793: read: connection reset by peer
	I1003 18:48:04.996743   83697 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:48:04.996769   83697 ubuntu.go:182] provisioning hostname "ha-422561"
	I1003 18:48:04.996830   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:05.013852   83697 main.go:141] libmachine: Using SSH client type: native
	I1003 18:48:05.014117   83697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1003 18:48:05.014132   83697 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-422561 && echo "ha-422561" | sudo tee /etc/hostname
	I1003 18:48:05.165019   83697 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:48:05.165102   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:05.183718   83697 main.go:141] libmachine: Using SSH client type: native
	I1003 18:48:05.183927   83697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1003 18:48:05.183944   83697 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422561/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:48:05.326262   83697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:48:05.326300   83697 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:48:05.326346   83697 ubuntu.go:190] setting up certificates
	I1003 18:48:05.326359   83697 provision.go:84] configureAuth start
	I1003 18:48:05.326433   83697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:48:05.343930   83697 provision.go:143] copyHostCerts
	I1003 18:48:05.343993   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:48:05.344029   83697 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:48:05.344046   83697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:48:05.344123   83697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:48:05.344224   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:48:05.344246   83697 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:48:05.344254   83697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:48:05.344285   83697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:48:05.344349   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:48:05.344369   83697 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:48:05.344376   83697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:48:05.344403   83697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:48:05.344471   83697 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.ha-422561 san=[127.0.0.1 192.168.49.2 ha-422561 localhost minikube]
	I1003 18:48:05.548175   83697 provision.go:177] copyRemoteCerts
	I1003 18:48:05.548237   83697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:48:05.548272   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:05.565560   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:05.665910   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:48:05.665989   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:48:05.683091   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:48:05.683139   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1003 18:48:05.699514   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:48:05.699586   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 18:48:05.716017   83697 provision.go:87] duration metric: took 389.640217ms to configureAuth
	I1003 18:48:05.716044   83697 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:48:05.716221   83697 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:48:05.716337   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:05.735187   83697 main.go:141] libmachine: Using SSH client type: native
	I1003 18:48:05.735436   83697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1003 18:48:05.735459   83697 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:48:05.988283   83697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:48:05.988310   83697 machine.go:96] duration metric: took 4.153593591s to provisionDockerMachine
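The provisioning step just above writes the --insecure-registry flag into /etc/sysconfig/crio.minikube and restarts CRI-O. To confirm it landed, from inside the node (for example via `minikube ssh -p ha-422561`):

	cat /etc/sysconfig/crio.minikube    # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl is-active crio            # expect: "active" once the restart completes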
	I1003 18:48:05.988321   83697 start.go:293] postStartSetup for "ha-422561" (driver="docker")
	I1003 18:48:05.988333   83697 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:48:05.988396   83697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:48:05.988435   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:06.005743   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:06.106231   83697 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:48:06.109622   83697 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:48:06.109647   83697 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:48:06.109656   83697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:48:06.109722   83697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:48:06.109816   83697 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:48:06.109829   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:48:06.109949   83697 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:48:06.117171   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:48:06.133466   83697 start.go:296] duration metric: took 145.133244ms for postStartSetup
	I1003 18:48:06.133546   83697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:48:06.133640   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:06.151048   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:06.247794   83697 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:48:06.252196   83697 fix.go:56] duration metric: took 4.715699614s for fixHost
	I1003 18:48:06.252229   83697 start.go:83] releasing machines lock for "ha-422561", held for 4.715747117s
	I1003 18:48:06.252292   83697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:48:06.269719   83697 ssh_runner.go:195] Run: cat /version.json
	I1003 18:48:06.269776   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:06.269848   83697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:48:06.269925   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:06.287309   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:06.288536   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:06.440444   83697 ssh_runner.go:195] Run: systemctl --version
	I1003 18:48:06.446644   83697 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:48:06.480099   83697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:48:06.484552   83697 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:48:06.484620   83697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:48:06.492151   83697 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 18:48:06.492174   83697 start.go:495] detecting cgroup driver to use...
	I1003 18:48:06.492207   83697 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:48:06.492242   83697 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:48:06.505874   83697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:48:06.518096   83697 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:48:06.518153   83697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:48:06.532038   83697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:48:06.543572   83697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:48:06.619047   83697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:48:06.695631   83697 docker.go:234] disabling docker service ...
	I1003 18:48:06.695709   83697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:48:06.709304   83697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:48:06.720766   83697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:48:06.794255   83697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:48:06.872577   83697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:48:06.884756   83697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:48:06.898431   83697 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:48:06.898497   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.907185   83697 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:48:06.907288   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.915650   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.923921   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.932255   83697 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:48:06.939698   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.948130   83697 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.955875   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
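Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys. This is a reconstruction from the commands, not a capture from the node; any other keys or section headers in the file are not shown:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]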
	I1003 18:48:06.963958   83697 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:48:06.970620   83697 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:48:06.977236   83697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:48:07.055447   83697 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 18:48:07.158344   83697 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:48:07.158401   83697 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:48:07.162236   83697 start.go:563] Will wait 60s for crictl version
	I1003 18:48:07.162283   83697 ssh_runner.go:195] Run: which crictl
	I1003 18:48:07.165713   83697 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:48:07.189610   83697 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
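crictl resolves its endpoint from the /etc/crictl.yaml written a few steps earlier; the same version probe can be run by hand with the endpoint made explicit:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version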
	I1003 18:48:07.189696   83697 ssh_runner.go:195] Run: crio --version
	I1003 18:48:07.216037   83697 ssh_runner.go:195] Run: crio --version
	I1003 18:48:07.243602   83697 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:48:07.244835   83697 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:48:07.261059   83697 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:48:07.264966   83697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
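The brace-group pattern above pins a hosts entry without duplicating it: strip any previous line for the name, append the fresh mapping, then copy the temp file back over /etc/hosts. The same trick spelled out (the temp-file name is illustrative):

	# Drop any stale host.minikube.internal line, append the current mapping:
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.49.1\thost.minikube.internal\n'; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts && rm /tmp/hosts.$$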
	I1003 18:48:07.274777   83697 kubeadm.go:883] updating cluster {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:48:07.274871   83697 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:48:07.275110   83697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:48:07.306722   83697 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:48:07.306745   83697 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:48:07.306802   83697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:48:07.331000   83697 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:48:07.331023   83697 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:48:07.331031   83697 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 18:48:07.331136   83697 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
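The kubelet unit drop-in above is uploaded a few lines below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; once it is in place, the merged unit can be inspected on the node with:

	systemctl cat kubelet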
	I1003 18:48:07.331212   83697 ssh_runner.go:195] Run: crio config
	I1003 18:48:07.375866   83697 cni.go:84] Creating CNI manager for ""
	I1003 18:48:07.375888   83697 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:48:07.375910   83697 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:48:07.375937   83697 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422561 NodeName:ha-422561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:48:07.376106   83697 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422561"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
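A generated file like this can be sanity-checked offline before kubeadm consumes it; recent kubeadm releases ship a validate subcommand (shown here against the path the scp below uploads to):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new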
	
	I1003 18:48:07.376177   83697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:48:07.383986   83697 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:48:07.384055   83697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:48:07.391187   83697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1003 18:48:07.403399   83697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:48:07.414754   83697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1003 18:48:07.426847   83697 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 18:48:07.430235   83697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:48:07.439401   83697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:48:07.516381   83697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:48:07.538237   83697 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561 for IP: 192.168.49.2
	I1003 18:48:07.538255   83697 certs.go:195] generating shared ca certs ...
	I1003 18:48:07.538271   83697 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:48:07.538437   83697 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:48:07.538512   83697 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:48:07.538528   83697 certs.go:257] generating profile certs ...
	I1003 18:48:07.538625   83697 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key
	I1003 18:48:07.538704   83697 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2ce2e456
	I1003 18:48:07.538754   83697 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key
	I1003 18:48:07.538768   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:48:07.538784   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:48:07.538800   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:48:07.538816   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:48:07.538835   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:48:07.538852   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:48:07.538868   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:48:07.538885   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:48:07.539018   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:48:07.539063   83697 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:48:07.539074   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:48:07.539115   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:48:07.539150   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:48:07.539179   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:48:07.539234   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:48:07.539276   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:48:07.539296   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:48:07.539321   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:48:07.540071   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:48:07.557965   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:48:07.575458   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:48:07.593468   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:48:07.615468   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1003 18:48:07.632748   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:48:07.648762   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:48:07.664587   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:48:07.680650   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:48:07.696584   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:48:07.712414   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:48:07.729163   83697 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:48:07.740601   83697 ssh_runner.go:195] Run: openssl version
	I1003 18:48:07.746326   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:48:07.754771   83697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:48:07.758126   83697 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:48:07.758166   83697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:48:07.791672   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
	I1003 18:48:07.799482   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:48:07.807556   83697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:48:07.811134   83697 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:48:07.811185   83697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:48:07.844703   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:48:07.852290   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:48:07.859877   83697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:48:07.863389   83697 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:48:07.863436   83697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:48:07.897292   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
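The <hash>.0 symlink names in /etc/ssl/certs come straight from OpenSSL's subject hash of each certificate, which is what the `openssl x509 -hash` calls above compute:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# Prints b5213941, hence the symlink /etc/ssl/certs/b5213941.0 above.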
	I1003 18:48:07.905487   83697 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:48:07.909431   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 18:48:07.943717   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 18:48:07.977826   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 18:48:08.011227   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 18:48:08.050549   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 18:48:08.092515   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
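Each `-checkend 86400` probe above exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, which is how the remaining lifetime of each cert is screened before reuse. For example:

	openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	  && echo "valid for another 24h" || echo "expires within 24h"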
	I1003 18:48:08.127614   83697 kubeadm.go:400] StartCluster: {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:48:08.127701   83697 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:48:08.127742   83697 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:48:08.154681   83697 cri.go:89] found id: ""
	I1003 18:48:08.154738   83697 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:48:08.162929   83697 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 18:48:08.162947   83697 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 18:48:08.163014   83697 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 18:48:08.169965   83697 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:48:08.170348   83697 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:48:08.170445   83697 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-8669/kubeconfig needs updating (will repair): [kubeconfig missing "ha-422561" cluster setting kubeconfig missing "ha-422561" context setting]
	I1003 18:48:08.170662   83697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:48:08.171209   83697 kapi.go:59] client config for ha-422561: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:48:08.171603   83697 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1003 18:48:08.171622   83697 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1003 18:48:08.171626   83697 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 18:48:08.171630   83697 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1003 18:48:08.171635   83697 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 18:48:08.171700   83697 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1003 18:48:08.172024   83697 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 18:48:08.179145   83697 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1003 18:48:08.179168   83697 kubeadm.go:601] duration metric: took 16.215128ms to restartPrimaryControlPlane
	I1003 18:48:08.179177   83697 kubeadm.go:402] duration metric: took 51.569431ms to StartCluster
	I1003 18:48:08.179192   83697 settings.go:142] acquiring lock: {Name:mk6bc950503a8f341b8aacc07a8bc72d5db3a25c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:48:08.179256   83697 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:48:08.179754   83697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:48:08.179960   83697 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:48:08.180005   83697 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 18:48:08.180077   83697 addons.go:69] Setting storage-provisioner=true in profile "ha-422561"
	I1003 18:48:08.180096   83697 addons.go:238] Setting addon storage-provisioner=true in "ha-422561"
	I1003 18:48:08.180126   83697 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:48:08.180143   83697 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:48:08.180118   83697 addons.go:69] Setting default-storageclass=true in profile "ha-422561"
	I1003 18:48:08.180191   83697 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-422561"
	I1003 18:48:08.180383   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:08.180572   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:08.183165   83697 out.go:179] * Verifying Kubernetes components...
	I1003 18:48:08.184503   83697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:48:08.199461   83697 kapi.go:59] client config for ha-422561: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:48:08.199832   83697 addons.go:238] Setting addon default-storageclass=true in "ha-422561"
	I1003 18:48:08.199880   83697 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:48:08.200383   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:08.200811   83697 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:48:08.202643   83697 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:48:08.202664   83697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 18:48:08.202713   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:08.226707   83697 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 18:48:08.226733   83697 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 18:48:08.226796   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:08.227638   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:08.244287   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:08.283745   83697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:48:08.296260   83697 node_ready.go:35] waiting up to 6m0s for node "ha-422561" to be "Ready" ...
	I1003 18:48:08.335656   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:48:08.351120   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:08.389710   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:08.389751   83697 retry.go:31] will retry after 328.107449ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
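The connection-refused retries here and below are expected during a restart: the addon applies race the apiserver coming back up on localhost:8443, and each failure is retried after a short backoff until it answers. A minimal stand-in for the same wait-then-apply loop (illustrative, not minikube's actual code):

	# Wait for the local apiserver to report ready, then apply the addon manifest.
	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.34.1/kubectl get --raw=/readyz >/dev/null 2>&1; do
	  sleep 1
	done
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml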
	W1003 18:48:08.404951   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:08.404995   83697 retry.go:31] will retry after 321.741218ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:08.718445   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:48:08.726854   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:08.773648   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:08.773686   83697 retry.go:31] will retry after 472.06094ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:08.777934   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:08.777965   83697 retry.go:31] will retry after 427.725934ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.205852   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:48:09.246423   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:09.258516   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.258554   83697 retry.go:31] will retry after 827.773787ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:09.299212   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.299244   83697 retry.go:31] will retry after 477.48466ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.776942   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:09.826781   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.826812   83697 retry.go:31] will retry after 1.085146889s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:10.087227   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:10.137943   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:10.137973   83697 retry.go:31] will retry after 739.377919ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:10.297625   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:10.877756   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:48:10.912311   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:10.929140   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:10.929175   83697 retry.go:31] will retry after 1.497643033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:10.963566   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:10.963603   83697 retry.go:31] will retry after 713.576365ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:11.678080   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:11.729368   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:11.729399   83697 retry.go:31] will retry after 2.048730039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:12.427099   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:12.477658   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:12.477701   83697 retry.go:31] will retry after 2.498808401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:12.797484   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:13.779038   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:13.830173   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:13.830204   83697 retry.go:31] will retry after 4.102789416s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:14.977444   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:15.028118   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:15.028144   83697 retry.go:31] will retry after 2.619354281s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:15.296814   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:17.296893   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:17.648338   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:17.699440   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:17.699475   83697 retry.go:31] will retry after 4.509399124s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:17.933252   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:17.983755   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:17.983783   83697 retry.go:31] will retry after 5.633518758s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:19.297715   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:21.797697   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:22.209174   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:22.259804   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:22.259835   83697 retry.go:31] will retry after 5.445935062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:23.618051   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:23.669865   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:23.669892   83697 retry.go:31] will retry after 8.812204221s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:24.297645   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:26.796887   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:27.706519   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:27.757124   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:27.757152   83697 retry.go:31] will retry after 10.217471518s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:29.296865   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:31.797282   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:32.482714   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:32.535080   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:32.535111   83697 retry.go:31] will retry after 6.964681944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:34.297049   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:36.297155   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:37.974824   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:38.025602   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:38.025636   83697 retry.go:31] will retry after 18.172547929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:38.297586   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:39.499928   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:39.551482   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:39.551509   83697 retry.go:31] will retry after 10.529315365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:40.297633   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:42.796931   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:44.797268   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:46.797590   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:49.296867   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:50.081207   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:50.133196   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:50.133222   83697 retry.go:31] will retry after 12.42585121s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:51.296943   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:53.297831   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:55.796917   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:56.198392   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:56.249657   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:56.249700   83697 retry.go:31] will retry after 29.529741997s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:57.797326   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:00.297226   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:02.297421   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:49:02.559843   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:49:02.612999   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:49:02.613029   83697 retry.go:31] will retry after 27.551629332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:49:04.797075   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	[... identical node_ready.go:55 "connection refused" line repeated every ~2-2.5s; 8 occurrences omitted ...]
	W1003 18:49:24.296940   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:49:25.779700   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:49:25.831805   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:49:25.831933   83697 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
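	The retry.go:31 entries above re-attempt each failed apply after a growing, jittered delay (477ms, 1.08s, ... up to ~29.5s before the addon is finally abandoned). The sketch below is a minimal illustration of such a retry loop with jittered exponential backoff; it is illustrative only, not minikube's actual retry implementation, and every name in it is hypothetical.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo re-runs fn until it succeeds or maxElapsed is exceeded, sleeping
// a jittered, exponentially growing delay between attempts (hypothetical
// helper, for illustration only).
func retryExpo(fn func() error, base, maxElapsed time.Duration) error {
	start := time.Now()
	delay := base
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > maxElapsed {
			return fmt.Errorf("giving up after %s: %w", time.Since(start), err)
		}
		// Jitter in [0.5, 1.5) of the nominal delay; this reproduces the
		// uneven intervals seen above (477ms, 1.08s, 739ms, 1.49s, ...).
		sleep := time.Duration(float64(delay) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	// Stand-in for the failing kubectl apply.
	err := retryExpo(func() error {
		return errors.New("dial tcp [::1]:8443: connect: connection refused")
	}, 500*time.Millisecond, 5*time.Second)
	if err != nil {
		fmt.Println(err)
	}
}
```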
	W1003 18:49:26.796936   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:28.797330   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:49:30.164992   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:49:30.215742   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:49:30.215772   83697 retry.go:31] will retry after 28.778272146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:49:30.797426   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	[... identical node_ready.go:55 "connection refused" line repeated every ~2-2.5s; 11 occurrences omitted ...]
	W1003 18:49:57.797151   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:49:58.994596   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:49:59.046263   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:49:59.046378   83697 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1003 18:49:59.048398   83697 out.go:179] * Enabled addons: 
	I1003 18:49:59.049773   83697 addons.go:514] duration metric: took 1m50.869773501s for enable addons: enabled=[]
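	Every failure in this run reduces to one symptom: nothing is accepting connections on apiserver port 8443, so kubectl's OpenAPI download (and every node query) dies with "connection refused", and addon enablement ends after 1m50s with an empty list. Before suspecting the addon manifests themselves, one could probe the apiserver directly. The sketch below does that, assuming the URL and port taken from the log and the standard kube-apiserver /healthz endpoint; it is a diagnostic aid, not part of the test suite.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is not trusted by this host; skipping
			// verification is acceptable for a liveness probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		// "connection refused" here means no process is listening on the
		// port at all, matching every failure in the log above.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %s (%d)\n", body, resp.StatusCode)
}
```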
	W1003 18:50:00.296924   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	[... identical node_ready.go:55 "connection refused" line repeated every ~2-2.5s; 92 occurrences omitted ...]
	W1003 18:53:31.797097   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:33.797406   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:36.296856   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:38.297182   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:40.297561   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:42.796949   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:44.797310   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:46.797517   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:49.296798   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:51.796965   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:53.797416   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:56.296843   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:53:58.297143   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:54:00.297294   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:54:02.297496   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:54:04.797414   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:54:07.296848   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:54:08.296599   83697 node_ready.go:38] duration metric: took 6m0.000289942s for node "ha-422561" to be "Ready" ...
	I1003 18:54:08.298641   83697 out.go:203] 
	W1003 18:54:08.300195   83697 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1003 18:54:08.300213   83697 out.go:285] * 
	W1003 18:54:08.301827   83697 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:54:08.303083   83697 out.go:203] 
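[editor note] The wait loop above (node_ready.go) polls the node's Ready condition roughly every 2 to 2.5 seconds until the 6m0s deadline, then exits with GUEST_START. The same condition can be re-checked by hand; this is a sketch only, with the kubeconfig path and endpoint taken from this run, and kubectl on the host is an assumption:

	# Hypothetical manual re-check of the condition the wait loop polls:
	kubectl --kubeconfig=/home/jenkins/minikube-integration/21625-8669/kubeconfig \
	        get node ha-422561 \
	        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# With the apiserver down it fails the same way the log shows:
	# dial tcp 192.168.49.2:8443: connect: connection refused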
	
	
	==> CRI-O <==
	Oct 03 18:54:03 ha-422561 crio[520]: time="2025-10-03T18:54:03.647082514Z" level=info msg="createCtr: removing container f7a34ef2837124c4149de511b8e4b8763d42ab1cc1b34ad4e960590c9eece03f" id=e2773dc5-e5b4-40f0-85ce-9ba6d287055f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:03 ha-422561 crio[520]: time="2025-10-03T18:54:03.647112111Z" level=info msg="createCtr: deleting container f7a34ef2837124c4149de511b8e4b8763d42ab1cc1b34ad4e960590c9eece03f from storage" id=e2773dc5-e5b4-40f0-85ce-9ba6d287055f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:03 ha-422561 crio[520]: time="2025-10-03T18:54:03.649207319Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-422561_kube-system_2640157afe5e174d7402164688eed7be_0" id=e2773dc5-e5b4-40f0-85ce-9ba6d287055f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.621559573Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=92db6541-0ada-48f2-9f54-cf27017442d0 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.622438768Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=53e7ba88-73d6-4add-a407-22c38e727336 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.623409827Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-422561/kube-controller-manager" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.623606545Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.626737821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.627138756Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.643546137Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.644841463Z" level=info msg="createCtr: deleting container ID bb66ba1f7d85ec39c3f89147d5fb3033ad189b33e5d9ed90c51d047b702b44da from idIndex" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.644890671Z" level=info msg="createCtr: removing container bb66ba1f7d85ec39c3f89147d5fb3033ad189b33e5d9ed90c51d047b702b44da" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.644930862Z" level=info msg="createCtr: deleting container bb66ba1f7d85ec39c3f89147d5fb3033ad189b33e5d9ed90c51d047b702b44da from storage" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.647097207Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-422561_kube-system_e643a03771f1e72f527532eff2c66a9c_0" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.621179385Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=4459c3b4-b2a1-4a7d-a3e0-ae61b105513d name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.622026747Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=7b48feb8-9d19-4049-a8c9-17077018b490 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.622792531Z" level=info msg="Creating container: kube-system/etcd-ha-422561/etcd" id=8a5ab53f-6af7-4d0d-9eea-b8fbcd2d862e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.623048974Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.626467402Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.626902399Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.640777041Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8a5ab53f-6af7-4d0d-9eea-b8fbcd2d862e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.642302866Z" level=info msg="createCtr: deleting container ID 72c008a26077bb623cd91e30f4b47ddb807b831e02622bc9aacd968ee2e14cbf from idIndex" id=8a5ab53f-6af7-4d0d-9eea-b8fbcd2d862e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.642334798Z" level=info msg="createCtr: removing container 72c008a26077bb623cd91e30f4b47ddb807b831e02622bc9aacd968ee2e14cbf" id=8a5ab53f-6af7-4d0d-9eea-b8fbcd2d862e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.642365568Z" level=info msg="createCtr: deleting container 72c008a26077bb623cd91e30f4b47ddb807b831e02622bc9aacd968ee2e14cbf from storage" id=8a5ab53f-6af7-4d0d-9eea-b8fbcd2d862e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.644616558Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-422561_kube-system_6803106e6cb30e1b9b282ce29772fddf_0" id=8a5ab53f-6af7-4d0d-9eea-b8fbcd2d862e name=/runtime.v1.RuntimeService/CreateContainer
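[editor note] Every static-pod container above fails at create time with the same runtime error, "cannot open sd-bus: No such file or directory": the OCI runtime is trying to reach systemd over its bus (typical when CRI-O runs with the systemd cgroup manager) and no usable bus exists inside the node. A diagnostic sketch, assuming the usual CRI-O config locations; switching cgroup_manager to cgroupfs is a commonly cited workaround, not a confirmed fix for this run:

	# Which cgroup manager is CRI-O using inside the node?
	docker exec ha-422561 sh -c 'crio config 2>/dev/null | grep cgroup_manager'
	docker exec ha-422561 sh -c 'grep -rn cgroup_manager /etc/crio/ 2>/dev/null'
	# Is there a systemd bus socket for it to talk to?
	docker exec ha-422561 ls -l /run/systemd/private /run/dbus/system_bus_socket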
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:54:12.276524    2355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:54:12.277032    2355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:54:12.278580    2355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:54:12.279019    2355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:54:12.280480    2355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
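[editor note] All five discovery calls fail identically because nothing is serving on port 8443, which is consistent with the empty container list above (no apiserver container ever started). A quick probe, assuming the kicbase image ships curl and ss:

	docker exec ha-422561 sh -c 'curl -sk --max-time 2 https://localhost:8443/healthz; echo exit=$?'
	docker exec ha-422561 sh -c 'ss -ltn | grep 8443 || echo "nothing listening on 8443"'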
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:54:12 up  1:36,  0 user,  load average: 0.02, 0.04, 0.07
	Linux ha-422561 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:54:03 ha-422561 kubelet[673]: E1003 18:54:03.649581     673 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:54:03 ha-422561 kubelet[673]:         container kube-scheduler start failed in pod kube-scheduler-ha-422561_kube-system(2640157afe5e174d7402164688eed7be): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:54:03 ha-422561 kubelet[673]:  > logger="UnhandledError"
	Oct 03 18:54:03 ha-422561 kubelet[673]: E1003 18:54:03.649608     673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-422561" podUID="2640157afe5e174d7402164688eed7be"
	Oct 03 18:54:03 ha-422561 kubelet[673]: E1003 18:54:03.705698     673 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-422561.186b0fa6982c434d  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-422561,UID:ha-422561,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-422561 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-422561,},FirstTimestamp:2025-10-03 18:48:07.610336077 +0000 UTC m=+0.070153337,LastTimestamp:2025-10-03 18:48:07.610336077 +0000 UTC m=+0.070153337,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-422561,}"
	Oct 03 18:54:05 ha-422561 kubelet[673]: E1003 18:54:05.621162     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:54:05 ha-422561 kubelet[673]: E1003 18:54:05.647355     673 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:54:05 ha-422561 kubelet[673]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:54:05 ha-422561 kubelet[673]:  > podSandboxID="2b327f08e5f0ad594cbcc01662a574beafe6a0fa01e2f506c269716f808713e3"
	Oct 03 18:54:05 ha-422561 kubelet[673]: E1003 18:54:05.647439     673 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:54:05 ha-422561 kubelet[673]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-422561_kube-system(e643a03771f1e72f527532eff2c66a9c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:54:05 ha-422561 kubelet[673]:  > logger="UnhandledError"
	Oct 03 18:54:05 ha-422561 kubelet[673]: E1003 18:54:05.647466     673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-422561" podUID="e643a03771f1e72f527532eff2c66a9c"
	Oct 03 18:54:07 ha-422561 kubelet[673]: E1003 18:54:07.636709     673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-422561\" not found"
	Oct 03 18:54:09 ha-422561 kubelet[673]: E1003 18:54:09.620742     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:54:09 ha-422561 kubelet[673]: E1003 18:54:09.644993     673 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:54:09 ha-422561 kubelet[673]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:54:09 ha-422561 kubelet[673]:  > podSandboxID="dab64913433ecb09fb1cb30b031bad1b6b1a6ed66d7a67cc65799603398c5952"
	Oct 03 18:54:09 ha-422561 kubelet[673]: E1003 18:54:09.645107     673 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:54:09 ha-422561 kubelet[673]:         container etcd start failed in pod etcd-ha-422561_kube-system(6803106e6cb30e1b9b282ce29772fddf): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:54:09 ha-422561 kubelet[673]:  > logger="UnhandledError"
	Oct 03 18:54:09 ha-422561 kubelet[673]: E1003 18:54:09.645137     673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-422561" podUID="6803106e6cb30e1b9b282ce29772fddf"
	Oct 03 18:54:10 ha-422561 kubelet[673]: E1003 18:54:10.261108     673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-422561?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 03 18:54:10 ha-422561 kubelet[673]: I1003 18:54:10.424386     673 kubelet_node_status.go:75] "Attempting to register node" node="ha-422561"
	Oct 03 18:54:10 ha-422561 kubelet[673]: E1003 18:54:10.424727     673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-422561"
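[editor note] The kubelet lines close the loop on the failure mode: the static control-plane pods (etcd, kube-scheduler, kube-controller-manager) never start because of the sd-bus create error, so there is no apiserver for the kubelet to register against, hence the repeated lease and register failures. To confirm the manifests are present but no container ever ran (a sketch; crictl being available inside the node is an assumption):

	docker exec ha-422561 ls /etc/kubernetes/manifests/
	docker exec ha-422561 crictl ps -a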
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561: exit status 2 (289.307549ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-422561" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (1.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-422561" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-422561\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-422561\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nf
sshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-422561\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonIm
ages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-422561" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-422561\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-422561\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-422561\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\
"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --o
utput json"
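[editor note] Both assertions parse the same `profile list --output json` payload: ha_test.go:305 expects `Config.Nodes` to contain 4 entries and ha_test.go:309 expects `Status` to be "HAppy", while this run reports one node in "Starting". The relevant fields can be pulled out of the payload directly (a sketch using jq, which is not part of the test itself):

	out/minikube-linux-amd64 profile list --output json \
	  | jq '.valid[] | {name: .Name, status: .Status, nodes: (.Config.Nodes | length)}'
	# => {"name":"ha-422561","status":"Starting","nodes":1}  for this run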
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-422561
helpers_test.go:243: (dbg) docker inspect ha-422561:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	        "Created": "2025-10-03T18:31:00.396132938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 83894,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T18:48:01.584921869Z",
	            "FinishedAt": "2025-10-03T18:48:00.240128679Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hostname",
	        "HostsPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/hosts",
	        "LogPath": "/var/lib/docker/containers/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512/eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512-json.log",
	        "Name": "/ha-422561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-422561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-422561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eef8fc426b2b481ce0c1767b251f72de2f6e8aa6418bc3cf91f6ced4e0408512",
	                "LowerDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f915b3c97b080649584d37a48839fd9052640011db5d7d756e41bf45116e9a94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-422561",
	                "Source": "/var/lib/docker/volumes/ha-422561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422561",
	                "name.minikube.sigs.k8s.io": "ha-422561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b7bc183f57948a25d46552eb6c438fe564ed77e2518bcbeb88c2428dc903e44c",
	            "SandboxKey": "/var/run/docker/netns/b7bc183f5794",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:4a:c7:54:b6:6a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de6aa7ca29f453c0d15cb280abde7ee215f554c89e78e3db8a0f7590468114b5",
	                    "EndpointID": "3c59a4bfdbcc71d01f483fb97819fde7e13586cafec98410913d5f8c234327ac",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422561",
	                        "eef8fc426b2b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
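[editor note] The inspect payload shows the node container running with its ports published on loopback (8443/tcp at 127.0.0.1:32796 in this run). A single mapping can be read back with a Go-template query instead of scanning the full JSON; this uses standard docker inspect templating:

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' ha-422561
	# => 32796 for this run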
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-422561 -n ha-422561: exit status 2 (292.762861ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-422561 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:39 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ kubectl │ ha-422561 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node add --alsologtostderr -v 5                                                    │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node stop m02 --alsologtostderr -v 5                                               │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:40 UTC │                     │
	│ node    │ ha-422561 node start m02 --alsologtostderr -v 5                                              │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │                     │
	│ node    │ ha-422561 node list --alsologtostderr -v 5                                                   │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │                     │
	│ stop    │ ha-422561 stop --alsologtostderr -v 5                                                        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │ 03 Oct 25 18:41 UTC │
	│ start   │ ha-422561 start --wait true --alsologtostderr -v 5                                           │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:41 UTC │                     │
	│ node    │ ha-422561 node list --alsologtostderr -v 5                                                   │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:47 UTC │                     │
	│ node    │ ha-422561 node delete m03 --alsologtostderr -v 5                                             │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:47 UTC │                     │
	│ stop    │ ha-422561 stop --alsologtostderr -v 5                                                        │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:47 UTC │ 03 Oct 25 18:48 UTC │
	│ start   │ ha-422561 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:48 UTC │                     │
	│ node    │ ha-422561 node add --control-plane --alsologtostderr -v 5                                    │ ha-422561 │ jenkins │ v1.37.0 │ 03 Oct 25 18:54 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:48:01
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:48:01.358006   83697 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:48:01.358289   83697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:48:01.358300   83697 out.go:374] Setting ErrFile to fd 2...
	I1003 18:48:01.358305   83697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:48:01.358536   83697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:48:01.358996   83697 out.go:368] Setting JSON to false
	I1003 18:48:01.359863   83697 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5432,"bootTime":1759511849,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:48:01.359957   83697 start.go:140] virtualization: kvm guest
	I1003 18:48:01.362210   83697 out.go:179] * [ha-422561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:48:01.363666   83697 notify.go:220] Checking for updates...
	I1003 18:48:01.363675   83697 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:48:01.365090   83697 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:48:01.366363   83697 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:48:01.367623   83697 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:48:01.368893   83697 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:48:01.370300   83697 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:48:01.372005   83697 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:48:01.372415   83697 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:48:01.396617   83697 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:48:01.396706   83697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:48:01.448802   83697 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:48:01.439437332 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:48:01.448910   83697 docker.go:318] overlay module found
	I1003 18:48:01.450884   83697 out.go:179] * Using the docker driver based on existing profile
	I1003 18:48:01.452231   83697 start.go:304] selected driver: docker
	I1003 18:48:01.452246   83697 start.go:924] validating driver "docker" against &{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:48:01.452322   83697 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:48:01.452405   83697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:48:01.509159   83697 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 18:48:01.498948046 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:48:01.509757   83697 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:48:01.509786   83697 cni.go:84] Creating CNI manager for ""
	I1003 18:48:01.509833   83697 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:48:01.509876   83697 start.go:348] cluster config:
	{Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:48:01.511871   83697 out.go:179] * Starting "ha-422561" primary control-plane node in "ha-422561" cluster
	I1003 18:48:01.513298   83697 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:48:01.514481   83697 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:48:01.515584   83697 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:48:01.515621   83697 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:48:01.515631   83697 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 18:48:01.515642   83697 cache.go:58] Caching tarball of preloaded images
	I1003 18:48:01.515725   83697 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 18:48:01.515744   83697 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:48:01.515874   83697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:48:01.536348   83697 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:48:01.536367   83697 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:48:01.536383   83697 cache.go:232] Successfully downloaded all kic artifacts
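
The image.go/cache.go lines above are minikube's check-before-pull: the pinned kicbase image is looked up in the local Docker daemon first, and both the pull and the load are skipped when it is already present. A minimal sketch of that check (not minikube's actual code), assuming only a docker CLI on PATH; the tag is the one from this run, with the digest dropped for brevity:

    package main

    import (
        "log"
        "os/exec"
    )

    // imageInDaemon reports whether the reference is already present in the
    // local daemon; `docker image inspect` exits non-zero when it is not.
    func imageInDaemon(ref string) bool {
        return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
        ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643"
        if imageInDaemon(ref) {
            log.Printf("%s exists in daemon, skipping pull", ref)
        } else {
            log.Printf("%s not found locally, a pull would follow", ref)
        }
    }
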
	I1003 18:48:01.536411   83697 start.go:360] acquireMachinesLock for ha-422561: {Name:mk32fd04a5d9b5f89831583bab7d7527f4d187a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:48:01.536466   83697 start.go:364] duration metric: took 37.424µs to acquireMachinesLock for "ha-422561"
	I1003 18:48:01.536482   83697 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:48:01.536489   83697 fix.go:54] fixHost starting: 
	I1003 18:48:01.536680   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:01.553807   83697 fix.go:112] recreateIfNeeded on ha-422561: state=Stopped err=<nil>
	W1003 18:48:01.553839   83697 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 18:48:01.555613   83697 out.go:252] * Restarting existing docker container for "ha-422561" ...
	I1003 18:48:01.555684   83697 cli_runner.go:164] Run: docker start ha-422561
	I1003 18:48:01.796448   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:01.815210   83697 kic.go:430] container "ha-422561" state is running.
	I1003 18:48:01.815590   83697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:48:01.834439   83697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/config.json ...
	I1003 18:48:01.834700   83697 machine.go:93] provisionDockerMachine start ...
	I1003 18:48:01.834770   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:01.852545   83697 main.go:141] libmachine: Using SSH client type: native
	I1003 18:48:01.852799   83697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1003 18:48:01.852812   83697 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:48:01.853394   83697 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49686->127.0.0.1:32793: read: connection reset by peer
	I1003 18:48:04.996743   83697 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
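
Worth noting in the lines above: the first dial at 18:48:01 is reset because sshd in the just-restarted container is not listening yet, and the same command succeeds on a later attempt at 18:48:04. A sketch of that retry-until-sshd-answers pattern with golang.org/x/crypto/ssh, reusing the address, user, and key path that appear in the sshutil.go lines below; the attempt count and delay are illustrative, not minikube's values:

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialWithRetry keeps dialing until sshd answers; early attempts against a
    // freshly started container typically fail with "connection reset by peer".
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
        var lastErr error
        for i := 0; i < attempts; i++ {
            client, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                return client, nil
            }
            lastErr = err
            time.Sleep(time.Second)
        }
        return nil, fmt.Errorf("ssh dial %s: %w", addr, lastErr)
    }

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node
            Timeout:         5 * time.Second,
        }
        client, err := dialWithRetry("127.0.0.1:32793", cfg, 10)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        log.Print("sshd is up")
    }
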
	
	I1003 18:48:04.996769   83697 ubuntu.go:182] provisioning hostname "ha-422561"
	I1003 18:48:04.996830   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:05.013852   83697 main.go:141] libmachine: Using SSH client type: native
	I1003 18:48:05.014117   83697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1003 18:48:05.014132   83697 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-422561 && echo "ha-422561" | sudo tee /etc/hostname
	I1003 18:48:05.165019   83697 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-422561
	
	I1003 18:48:05.165102   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:05.183718   83697 main.go:141] libmachine: Using SSH client type: native
	I1003 18:48:05.183927   83697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1003 18:48:05.183944   83697 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422561/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:48:05.326262   83697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:48:05.326300   83697 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 18:48:05.326346   83697 ubuntu.go:190] setting up certificates
	I1003 18:48:05.326359   83697 provision.go:84] configureAuth start
	I1003 18:48:05.326433   83697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:48:05.343930   83697 provision.go:143] copyHostCerts
	I1003 18:48:05.343993   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:48:05.344029   83697 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 18:48:05.344046   83697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 18:48:05.344123   83697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 18:48:05.344224   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:48:05.344246   83697 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 18:48:05.344254   83697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 18:48:05.344285   83697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 18:48:05.344349   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:48:05.344369   83697 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 18:48:05.344376   83697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 18:48:05.344403   83697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 18:48:05.344471   83697 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.ha-422561 san=[127.0.0.1 192.168.49.2 ha-422561 localhost minikube]
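
provision.go:117 above issues a server certificate signed by the shared minikube CA, carrying the SAN list shown. A self-contained sketch of the same issuance with crypto/x509; it generates a throwaway CA in memory, whereas the real run loads ca.pem/ca-key.pem from the .minikube/certs directory:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "log"
        "math/big"
        "net"
        "time"
    )

    func check(err error) {
        if err != nil {
            log.Fatal(err)
        }
    }

    func main() {
        // In-memory stand-in for ca.pem / ca-key.pem.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        check(err)
        caCert, err := x509.ParseCertificate(caDER)
        check(err)

        // Server certificate with the SAN list from the log line above.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-422561"}},
            DNSNames:     []string{"ha-422561", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        _, err = x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        check(err)
        log.Print("issued a server.pem-equivalent certificate for ha-422561")
    }
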
	I1003 18:48:05.548175   83697 provision.go:177] copyRemoteCerts
	I1003 18:48:05.548237   83697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:48:05.548272   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:05.565560   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:05.665910   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:48:05.665989   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:48:05.683091   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:48:05.683139   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1003 18:48:05.699514   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:48:05.699586   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 18:48:05.716017   83697 provision.go:87] duration metric: took 389.640217ms to configureAuth
	I1003 18:48:05.716044   83697 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:48:05.716221   83697 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:48:05.716337   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:05.735187   83697 main.go:141] libmachine: Using SSH client type: native
	I1003 18:48:05.735436   83697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1003 18:48:05.735459   83697 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:48:05.988283   83697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:48:05.988310   83697 machine.go:96] duration metric: took 4.153593591s to provisionDockerMachine
	I1003 18:48:05.988321   83697 start.go:293] postStartSetup for "ha-422561" (driver="docker")
	I1003 18:48:05.988333   83697 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:48:05.988396   83697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:48:05.988435   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:06.005743   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:06.106231   83697 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:48:06.109622   83697 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:48:06.109647   83697 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:48:06.109656   83697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 18:48:06.109722   83697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 18:48:06.109816   83697 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 18:48:06.109829   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /etc/ssl/certs/122122.pem
	I1003 18:48:06.109949   83697 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:48:06.117171   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:48:06.133466   83697 start.go:296] duration metric: took 145.133244ms for postStartSetup
	I1003 18:48:06.133546   83697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:48:06.133640   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:06.151048   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:06.247794   83697 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:48:06.252196   83697 fix.go:56] duration metric: took 4.715699614s for fixHost
	I1003 18:48:06.252229   83697 start.go:83] releasing machines lock for "ha-422561", held for 4.715747117s
	I1003 18:48:06.252292   83697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422561
	I1003 18:48:06.269719   83697 ssh_runner.go:195] Run: cat /version.json
	I1003 18:48:06.269776   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:06.269848   83697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:48:06.269925   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:06.287309   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:06.288536   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:06.440444   83697 ssh_runner.go:195] Run: systemctl --version
	I1003 18:48:06.446644   83697 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:48:06.480099   83697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:48:06.484552   83697 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:48:06.484620   83697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:48:06.492151   83697 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
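
The find/mv pair above neutralizes any pre-existing bridge or podman CNI configs by renaming them to *.mk_disabled (here there were none to disable). The same idea as a local Go sketch; the directory path matches the log, and the matching rules are simplified relative to the find expression:

    package main

    import (
        "log"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeConfigs renames bridge/podman CNI configs so the CRI
    // runtime stops picking them up, mirroring the `find ... -exec mv` above.
    func disableBridgeConfigs(dir string) error {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return err
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return err
                }
            }
        }
        return nil
    }

    func main() {
        if err := disableBridgeConfigs("/etc/cni/net.d"); err != nil {
            log.Fatal(err)
        }
    }
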
	I1003 18:48:06.492174   83697 start.go:495] detecting cgroup driver to use...
	I1003 18:48:06.492207   83697 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 18:48:06.492242   83697 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:48:06.505874   83697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:48:06.518096   83697 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:48:06.518153   83697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:48:06.532038   83697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:48:06.543572   83697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:48:06.619047   83697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:48:06.695631   83697 docker.go:234] disabling docker service ...
	I1003 18:48:06.695709   83697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:48:06.709304   83697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:48:06.720766   83697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:48:06.794255   83697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:48:06.872577   83697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:48:06.884756   83697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:48:06.898431   83697 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:48:06.898497   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.907185   83697 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 18:48:06.907288   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.915650   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.923921   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.932255   83697 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:48:06.939698   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.948130   83697 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.955875   83697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:48:06.963958   83697 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:48:06.970620   83697 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:48:06.977236   83697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:48:07.055447   83697 ssh_runner.go:195] Run: sudo systemctl restart crio
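
The sed runs above patch /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the systemd cgroup manager, reinstate the unprivileged-port sysctl, then daemon-reload and restart crio. A sketch of the first two rewrites done with Go regexps on a local copy of the file; the real run executes sed over SSH rather than editing in-process:

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        path := "02-crio.conf" // local copy; the live file sits under /etc/crio/crio.conf.d/
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        // Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        // Same effect as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "systemd"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            log.Fatal(err)
        }
    }
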
	I1003 18:48:07.158344   83697 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:48:07.158401   83697 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:48:07.162236   83697 start.go:563] Will wait 60s for crictl version
	I1003 18:48:07.162283   83697 ssh_runner.go:195] Run: which crictl
	I1003 18:48:07.165713   83697 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:48:07.189610   83697 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:48:07.189696   83697 ssh_runner.go:195] Run: crio --version
	I1003 18:48:07.216037   83697 ssh_runner.go:195] Run: crio --version
	I1003 18:48:07.243602   83697 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:48:07.244835   83697 cli_runner.go:164] Run: docker network inspect ha-422561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:48:07.261059   83697 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:48:07.264966   83697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
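
The bash one-liner above is an idempotent upsert: strip any existing host.minikube.internal mapping from /etc/hosts, append the current one, and copy the result back over the original. The same logic as an in-process sketch; the relative path "hosts" stands in for /etc/hosts, which needs root to modify:

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // upsertHost drops any stale tab-separated mapping for host and appends
    // the current ip -> host line, mirroring the grep -v / echo pipeline above.
    func upsertHost(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := upsertHost("hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }
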
	I1003 18:48:07.274777   83697 kubeadm.go:883] updating cluster {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:48:07.274871   83697 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:48:07.275110   83697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:48:07.306722   83697 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:48:07.306745   83697 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:48:07.306802   83697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:48:07.331000   83697 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:48:07.331023   83697 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:48:07.331031   83697 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 18:48:07.331136   83697 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 18:48:07.331212   83697 ssh_runner.go:195] Run: crio config
	I1003 18:48:07.375866   83697 cni.go:84] Creating CNI manager for ""
	I1003 18:48:07.375888   83697 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 18:48:07.375910   83697 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:48:07.375937   83697 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422561 NodeName:ha-422561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:48:07.376106   83697 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422561"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
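
The kubeadm config dumped above is assembled by minikube from the option struct at kubeadm.go:189, presumably rendered through Go templates. A minimal sketch of that substitution idea with text/template, covering just two of the fields and using the values from this run; the template shape is illustrative, not minikube's actual template:

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        err := t.Execute(os.Stdout, struct {
            AdvertiseAddress string
            APIServerPort    int
        }{"192.168.49.2", 8443})
        if err != nil {
            log.Fatal(err)
        }
    }
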
	
	I1003 18:48:07.376177   83697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:48:07.383986   83697 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:48:07.384055   83697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:48:07.391187   83697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1003 18:48:07.403399   83697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:48:07.414754   83697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1003 18:48:07.426847   83697 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 18:48:07.430235   83697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:48:07.439401   83697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:48:07.516381   83697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:48:07.538237   83697 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561 for IP: 192.168.49.2
	I1003 18:48:07.538255   83697 certs.go:195] generating shared ca certs ...
	I1003 18:48:07.538271   83697 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:48:07.538437   83697 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 18:48:07.538512   83697 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 18:48:07.538528   83697 certs.go:257] generating profile certs ...
	I1003 18:48:07.538625   83697 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key
	I1003 18:48:07.538704   83697 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key.2ce2e456
	I1003 18:48:07.538754   83697 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key
	I1003 18:48:07.538768   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:48:07.538784   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:48:07.538800   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:48:07.538816   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:48:07.538835   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:48:07.538852   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:48:07.538868   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:48:07.538885   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:48:07.539018   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 18:48:07.539063   83697 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 18:48:07.539074   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:48:07.539115   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:48:07.539150   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:48:07.539179   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 18:48:07.539234   83697 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 18:48:07.539276   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem -> /usr/share/ca-certificates/12212.pem
	I1003 18:48:07.539296   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> /usr/share/ca-certificates/122122.pem
	I1003 18:48:07.539321   83697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:48:07.540071   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:48:07.557965   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:48:07.575458   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:48:07.593468   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:48:07.615468   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1003 18:48:07.632748   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:48:07.648762   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:48:07.664587   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:48:07.680650   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 18:48:07.696584   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 18:48:07.712414   83697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:48:07.729163   83697 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:48:07.740601   83697 ssh_runner.go:195] Run: openssl version
	I1003 18:48:07.746326   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 18:48:07.754771   83697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 18:48:07.758126   83697 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 18:48:07.758166   83697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 18:48:07.791672   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
	I1003 18:48:07.799482   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 18:48:07.807556   83697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 18:48:07.811134   83697 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 18:48:07.811185   83697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 18:48:07.844703   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:48:07.852290   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:48:07.859877   83697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:48:07.863389   83697 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:48:07.863436   83697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:48:07.897292   83697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
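
Each `openssl x509 -hash` / `ln -fs` pair above publishes a CA certificate under its OpenSSL subject hash as /etc/ssl/certs/<hash>.0, which is how OpenSSL-style trust directories are indexed (b5213941.0 for minikubeCA here). A sketch of that step; it shells out to the openssl binary and needs write access to the certs directory:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    // trustCert computes the subject hash of certPath and links it into
    // certsDir as <hash>.0, mirroring the openssl/ln pair in the log.
    func trustCert(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := fmt.Sprintf("%s/%s.0", certsDir, hash)
        _ = os.Remove(link) // mimic ln -fs: replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            log.Fatal(err)
        }
    }
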
	I1003 18:48:07.905487   83697 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:48:07.909431   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 18:48:07.943717   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 18:48:07.977826   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 18:48:08.011227   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 18:48:08.050549   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 18:48:08.092515   83697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
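
The six `-checkend 86400` probes above ask whether each control-plane certificate remains valid for at least another 86400 seconds (24 hours); a failing check would trigger regeneration. The equivalent test in Go, parsing the PEM and comparing NotAfter:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires within d,
    // matching the semantics of `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        if soon {
            log.Print("certificate expires within 24h; it would be regenerated")
        }
    }
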
	I1003 18:48:08.127614   83697 kubeadm.go:400] StartCluster: {Name:ha-422561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-422561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:48:08.127701   83697 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:48:08.127742   83697 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:48:08.154681   83697 cri.go:89] found id: ""
	I1003 18:48:08.154738   83697 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:48:08.162929   83697 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 18:48:08.162947   83697 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 18:48:08.163014   83697 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 18:48:08.169965   83697 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:48:08.170348   83697 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-422561" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:48:08.170445   83697 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-8669/kubeconfig needs updating (will repair): [kubeconfig missing "ha-422561" cluster setting kubeconfig missing "ha-422561" context setting]
	I1003 18:48:08.170662   83697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:48:08.171209   83697 kapi.go:59] client config for ha-422561: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:48:08.171603   83697 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1003 18:48:08.171622   83697 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1003 18:48:08.171626   83697 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 18:48:08.171630   83697 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1003 18:48:08.171635   83697 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 18:48:08.171700   83697 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1003 18:48:08.172024   83697 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 18:48:08.179145   83697 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1003 18:48:08.179168   83697 kubeadm.go:601] duration metric: took 16.215128ms to restartPrimaryControlPlane
	I1003 18:48:08.179177   83697 kubeadm.go:402] duration metric: took 51.569431ms to StartCluster
	I1003 18:48:08.179192   83697 settings.go:142] acquiring lock: {Name:mk6bc950503a8f341b8aacc07a8bc72d5db3a25c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:48:08.179256   83697 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:48:08.179754   83697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/kubeconfig: {Name:mk6b7939515483ba69c1f358a3a21494f4ead7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:48:08.179960   83697 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:48:08.180005   83697 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 18:48:08.180077   83697 addons.go:69] Setting storage-provisioner=true in profile "ha-422561"
	I1003 18:48:08.180096   83697 addons.go:238] Setting addon storage-provisioner=true in "ha-422561"
	I1003 18:48:08.180126   83697 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:48:08.180143   83697 config.go:182] Loaded profile config "ha-422561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:48:08.180118   83697 addons.go:69] Setting default-storageclass=true in profile "ha-422561"
	I1003 18:48:08.180191   83697 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-422561"
	I1003 18:48:08.180383   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:08.180572   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:08.183165   83697 out.go:179] * Verifying Kubernetes components...
	I1003 18:48:08.184503   83697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:48:08.199461   83697 kapi.go:59] client config for ha-422561: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/profiles/ha-422561/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:48:08.199832   83697 addons.go:238] Setting addon default-storageclass=true in "ha-422561"
	I1003 18:48:08.199880   83697 host.go:66] Checking if "ha-422561" exists ...
	I1003 18:48:08.200383   83697 cli_runner.go:164] Run: docker container inspect ha-422561 --format={{.State.Status}}
	I1003 18:48:08.200811   83697 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:48:08.202643   83697 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:48:08.202664   83697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 18:48:08.202713   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:08.226707   83697 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 18:48:08.226733   83697 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 18:48:08.226796   83697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422561
	I1003 18:48:08.227638   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:08.244287   83697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/ha-422561/id_rsa Username:docker}
	I1003 18:48:08.283745   83697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:48:08.296260   83697 node_ready.go:35] waiting up to 6m0s for node "ha-422561" to be "Ready" ...
	I1003 18:48:08.335656   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:48:08.351120   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:08.389710   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:08.389751   83697 retry.go:31] will retry after 328.107449ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:08.404951   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:08.404995   83697 retry.go:31] will retry after 321.741218ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
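
The retry.go:31 lines above come from a backoff loop: each failed apply is rescheduled with a randomized, growing delay (roughly 300ms at first, close to 30s by the end of this log), and every attempt after the first adds --force. A hand-rolled sketch of that pattern is below; the applyWithRetry helper and the exact growth factor are assumptions for illustration, not minikube's actual retry.go.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry is a hypothetical helper: it re-runs kubectl apply with a
// randomized, doubling delay until it succeeds or the deadline passes.
func applyWithRetry(manifest string, deadline time.Duration) error {
	start := time.Now()
	delay := 300 * time.Millisecond // the first delays in this log are ~300ms
	for {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("giving up on %s: %v\noutput:\n%s", manifest, err, out)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2 // roughly matches the growth from ~300ms to ~29s seen here
	}
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
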
	I1003 18:48:08.718445   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:48:08.726854   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:08.773648   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:08.773686   83697 retry.go:31] will retry after 472.06094ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:08.777934   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:08.777965   83697 retry.go:31] will retry after 427.725934ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.205852   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:48:09.246423   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:09.258516   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.258554   83697 retry.go:31] will retry after 827.773787ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:09.299212   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.299244   83697 retry.go:31] will retry after 477.48466ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.776942   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:09.826781   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:09.826812   83697 retry.go:31] will retry after 1.085146889s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:10.087227   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:10.137943   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:10.137973   83697 retry.go:31] will retry after 739.377919ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:10.297625   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:10.877756   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:48:10.912311   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:10.929140   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:10.929175   83697 retry.go:31] will retry after 1.497643033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:10.963566   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:10.963603   83697 retry.go:31] will retry after 713.576365ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:11.678080   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:11.729368   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:11.729399   83697 retry.go:31] will retry after 2.048730039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:12.427099   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:12.477658   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:12.477701   83697 retry.go:31] will retry after 2.498808401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:12.797484   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:13.779038   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:13.830173   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:13.830204   83697 retry.go:31] will retry after 4.102789416s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:14.977444   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:15.028118   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:15.028144   83697 retry.go:31] will retry after 2.619354281s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:15.296814   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:17.296893   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:17.648338   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:17.699440   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:17.699475   83697 retry.go:31] will retry after 4.509399124s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:17.933252   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:17.983755   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:17.983783   83697 retry.go:31] will retry after 5.633518758s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:19.297715   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:21.797697   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:22.209174   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:22.259804   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:22.259835   83697 retry.go:31] will retry after 5.445935062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:23.618051   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:23.669865   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:23.669892   83697 retry.go:31] will retry after 8.812204221s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:24.297645   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:26.796887   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:27.706519   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:27.757124   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:27.757152   83697 retry.go:31] will retry after 10.217471518s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:29.296865   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:31.797282   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:32.482714   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:32.535080   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:32.535111   83697 retry.go:31] will retry after 6.964681944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:34.297049   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:36.297155   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:37.974824   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:38.025602   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:38.025636   83697 retry.go:31] will retry after 18.172547929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:38.297586   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:39.499928   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:39.551482   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:39.551509   83697 retry.go:31] will retry after 10.529315365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:40.297633   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:42.796931   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:44.797268   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:46.797590   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:49.296867   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:50.081207   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:48:50.133196   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:50.133222   83697 retry.go:31] will retry after 12.42585121s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:51.296943   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:53.297831   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:48:55.796917   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:48:56.198392   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:48:56.249657   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:48:56.249700   83697 retry.go:31] will retry after 29.529741997s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:48:57.797326   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:00.297226   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:02.297421   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:49:02.559843   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:49:02.612999   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:49:02.613029   83697 retry.go:31] will retry after 27.551629332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:49:04.797075   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:06.797507   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:09.297080   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:11.297269   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:13.796944   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:15.797079   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:17.797368   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:19.797700   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:21.797785   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:24.296940   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:49:25.779700   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1003 18:49:25.831805   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:49:25.831933   83697 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1003 18:49:26.796936   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:28.797330   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:49:30.164992   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:49:30.215742   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:49:30.215772   83697 retry.go:31] will retry after 28.778272146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:49:30.797426   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:33.296941   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:35.297159   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:37.297417   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:39.297817   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:41.796863   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:44.296913   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:46.796856   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:48.797475   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:50.797629   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:53.296889   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:55.796908   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:49:57.797151   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:49:58.994596   83697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1003 18:49:59.046263   83697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:49:59.046378   83697 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1003 18:49:59.048398   83697 out.go:179] * Enabled addons: 
	I1003 18:49:59.049773   83697 addons.go:514] duration metric: took 1m50.869773501s for enable addons: enabled=[]
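
From here to the end of the section the log is a single poll loop: node_ready.go re-queries the node's Ready condition every 2-2.5s and keeps hitting the same connection-refused error until the 6m0s budget set at 18:48:08 runs out. A client-go sketch of such a loop follows; waitNodeReady is a hypothetical helper, not minikube's node_ready.go, and the kubeconfig path is taken from the kubectl commands above.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition until it is True or the
// timeout expires, logging and retrying on transient API errors.
func waitNodeReady(name string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // the log polls roughly every 2-2.5s
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	if err := waitNodeReady("ha-422561", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
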
	W1003 18:50:00.296924   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:02.297548   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:04.797690   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:07.297348   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:09.297437   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:11.797512   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:14.297319   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:16.797104   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:19.296854   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	W1003 18:50:21.297701   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	[... 99 near-identical node_ready.go:55 "will retry" lines, logged roughly every 2 to 2.5 seconds from 18:50:23 through 18:54:04, elided ...]
	W1003 18:54:07.296848   83697 node_ready.go:55] error getting node "ha-422561" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422561": dial tcp 192.168.49.2:8443: connect: connection refused
	I1003 18:54:08.296599   83697 node_ready.go:38] duration metric: took 6m0.000289942s for node "ha-422561" to be "Ready" ...
	I1003 18:54:08.298641   83697 out.go:203] 
	W1003 18:54:08.300195   83697 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1003 18:54:08.300213   83697 out.go:285] * 
	W1003 18:54:08.301827   83697 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:54:08.303083   83697 out.go:203] 
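
The six-minute wait above exhausts its deadline because the apiserver on 192.168.49.2:8443 never accepts a connection, so the node's "Ready" condition can never be read. The node_ready.go lines are the visible trace of a poll against the Kubernetes API. Below is a minimal, self-contained sketch of that kind of readiness poll using client-go; it is illustrative only, not minikube's actual implementation, and every name in it is invented for the example.

// readiness_poll.go: a hedged sketch of a node-readiness poll with client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition until it is True or the
// timeout expires. Transient API errors (such as the "connection refused"
// seen above while the control plane is down) are logged and retried.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				fmt.Printf("error getting node %q condition \"Ready\" status (will retry): %v\n", name, err)
				return false, nil // a nil error keeps the poll retrying
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The 6-minute budget mirrors the "wait 6m0s for node" deadline in the log.
	if err := waitNodeReady(context.Background(), cs, "ha-422561", 6*time.Minute); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}

Because the condition function swallows transient errors, the caller sees exactly one "will retry" line per interval until the deadline, which matches the shape of the log above.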
	
	
	==> CRI-O <==
	Oct 03 18:54:03 ha-422561 crio[520]: time="2025-10-03T18:54:03.647082514Z" level=info msg="createCtr: removing container f7a34ef2837124c4149de511b8e4b8763d42ab1cc1b34ad4e960590c9eece03f" id=e2773dc5-e5b4-40f0-85ce-9ba6d287055f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:03 ha-422561 crio[520]: time="2025-10-03T18:54:03.647112111Z" level=info msg="createCtr: deleting container f7a34ef2837124c4149de511b8e4b8763d42ab1cc1b34ad4e960590c9eece03f from storage" id=e2773dc5-e5b4-40f0-85ce-9ba6d287055f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:03 ha-422561 crio[520]: time="2025-10-03T18:54:03.649207319Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-422561_kube-system_2640157afe5e174d7402164688eed7be_0" id=e2773dc5-e5b4-40f0-85ce-9ba6d287055f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.621559573Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=92db6541-0ada-48f2-9f54-cf27017442d0 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.622438768Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=53e7ba88-73d6-4add-a407-22c38e727336 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.623409827Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-422561/kube-controller-manager" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.623606545Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.626737821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.627138756Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.643546137Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.644841463Z" level=info msg="createCtr: deleting container ID bb66ba1f7d85ec39c3f89147d5fb3033ad189b33e5d9ed90c51d047b702b44da from idIndex" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.644890671Z" level=info msg="createCtr: removing container bb66ba1f7d85ec39c3f89147d5fb3033ad189b33e5d9ed90c51d047b702b44da" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.644930862Z" level=info msg="createCtr: deleting container bb66ba1f7d85ec39c3f89147d5fb3033ad189b33e5d9ed90c51d047b702b44da from storage" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:05 ha-422561 crio[520]: time="2025-10-03T18:54:05.647097207Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-422561_kube-system_e643a03771f1e72f527532eff2c66a9c_0" id=5b433532-0d82-4118-92d3-c661e2ad4431 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.621179385Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=4459c3b4-b2a1-4a7d-a3e0-ae61b105513d name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.622026747Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=7b48feb8-9d19-4049-a8c9-17077018b490 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.622792531Z" level=info msg="Creating container: kube-system/etcd-ha-422561/etcd" id=8a5ab53f-6af7-4d0d-9eea-b8fbcd2d862e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.623048974Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.626467402Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.626902399Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.640777041Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8a5ab53f-6af7-4d0d-9eea-b8fbcd2d862e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.642302866Z" level=info msg="createCtr: deleting container ID 72c008a26077bb623cd91e30f4b47ddb807b831e02622bc9aacd968ee2e14cbf from idIndex" id=8a5ab53f-6af7-4d0d-9eea-b8fbcd2d862e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.642334798Z" level=info msg="createCtr: removing container 72c008a26077bb623cd91e30f4b47ddb807b831e02622bc9aacd968ee2e14cbf" id=8a5ab53f-6af7-4d0d-9eea-b8fbcd2d862e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.642365568Z" level=info msg="createCtr: deleting container 72c008a26077bb623cd91e30f4b47ddb807b831e02622bc9aacd968ee2e14cbf from storage" id=8a5ab53f-6af7-4d0d-9eea-b8fbcd2d862e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:54:09 ha-422561 crio[520]: time="2025-10-03T18:54:09.644616558Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-422561_kube-system_6803106e6cb30e1b9b282ce29772fddf_0" id=8a5ab53f-6af7-4d0d-9eea-b8fbcd2d862e name=/runtime.v1.RuntimeService/CreateContainer
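
Every CreateContainer attempt in this section fails with the same runtime error, "cannot open sd-bus: No such file or directory", which is why no control-plane container ever comes up. That message typically means the OCI runtime was asked to drive cgroups through systemd but cannot reach a systemd D-Bus socket inside the node container. A small diagnostic sketch follows; the two socket paths are the conventional locations and are an assumption of this example, not something shown in the report.

// sdbus_check.go: probe the sockets an sd-bus connection conventionally needs.
package main

import (
	"fmt"
	"os"
)

func main() {
	for _, p := range []string{
		"/run/systemd/private",        // systemd's private API socket (assumed path)
		"/run/dbus/system_bus_socket", // the system D-Bus socket (assumed path)
	} {
		if _, err := os.Stat(p); err != nil {
			fmt.Printf("%s: missing (%v)\n", p, err)
		} else {
			fmt.Printf("%s: present\n", p)
		}
	}
}

If neither socket exists on the node, the usual remedies are running systemd inside the node image or switching the runtime's cgroup manager away from systemd; this report does not show which cgroup configuration the job used.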
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 18:54:13.823035    2525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:54:13.823555    2525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:54:13.825189    2525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:54:13.825714    2525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 18:54:13.827357    2525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:54:13 up  1:36,  0 user,  load average: 0.02, 0.04, 0.07
	Linux ha-422561 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 18:54:03 ha-422561 kubelet[673]:         container kube-scheduler start failed in pod kube-scheduler-ha-422561_kube-system(2640157afe5e174d7402164688eed7be): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:54:03 ha-422561 kubelet[673]:  > logger="UnhandledError"
	Oct 03 18:54:03 ha-422561 kubelet[673]: E1003 18:54:03.649608     673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-422561" podUID="2640157afe5e174d7402164688eed7be"
	Oct 03 18:54:03 ha-422561 kubelet[673]: E1003 18:54:03.705698     673 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-422561.186b0fa6982c434d  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-422561,UID:ha-422561,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-422561 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-422561,},FirstTimestamp:2025-10-03 18:48:07.610336077 +0000 UTC m=+0.070153337,LastTimestamp:2025-10-03 18:48:07.610336077 +0000 UTC m=+0.070153337,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-422561,}"
	Oct 03 18:54:05 ha-422561 kubelet[673]: E1003 18:54:05.621162     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:54:05 ha-422561 kubelet[673]: E1003 18:54:05.647355     673 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:54:05 ha-422561 kubelet[673]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:54:05 ha-422561 kubelet[673]:  > podSandboxID="2b327f08e5f0ad594cbcc01662a574beafe6a0fa01e2f506c269716f808713e3"
	Oct 03 18:54:05 ha-422561 kubelet[673]: E1003 18:54:05.647439     673 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:54:05 ha-422561 kubelet[673]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-422561_kube-system(e643a03771f1e72f527532eff2c66a9c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:54:05 ha-422561 kubelet[673]:  > logger="UnhandledError"
	Oct 03 18:54:05 ha-422561 kubelet[673]: E1003 18:54:05.647466     673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-422561" podUID="e643a03771f1e72f527532eff2c66a9c"
	Oct 03 18:54:07 ha-422561 kubelet[673]: E1003 18:54:07.636709     673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-422561\" not found"
	Oct 03 18:54:09 ha-422561 kubelet[673]: E1003 18:54:09.620742     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-422561\" not found" node="ha-422561"
	Oct 03 18:54:09 ha-422561 kubelet[673]: E1003 18:54:09.644993     673 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 18:54:09 ha-422561 kubelet[673]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:54:09 ha-422561 kubelet[673]:  > podSandboxID="dab64913433ecb09fb1cb30b031bad1b6b1a6ed66d7a67cc65799603398c5952"
	Oct 03 18:54:09 ha-422561 kubelet[673]: E1003 18:54:09.645107     673 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 18:54:09 ha-422561 kubelet[673]:         container etcd start failed in pod etcd-ha-422561_kube-system(6803106e6cb30e1b9b282ce29772fddf): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 18:54:09 ha-422561 kubelet[673]:  > logger="UnhandledError"
	Oct 03 18:54:09 ha-422561 kubelet[673]: E1003 18:54:09.645137     673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-422561" podUID="6803106e6cb30e1b9b282ce29772fddf"
	Oct 03 18:54:10 ha-422561 kubelet[673]: E1003 18:54:10.261108     673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-422561?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 03 18:54:10 ha-422561 kubelet[673]: I1003 18:54:10.424386     673 kubelet_node_status.go:75] "Attempting to register node" node="ha-422561"
	Oct 03 18:54:10 ha-422561 kubelet[673]: E1003 18:54:10.424727     673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-422561"
	Oct 03 18:54:13 ha-422561 kubelet[673]: E1003 18:54:13.706993     673 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-422561.186b0fa6982c434d  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-422561,UID:ha-422561,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-422561 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-422561,},FirstTimestamp:2025-10-03 18:48:07.610336077 +0000 UTC m=+0.070153337,LastTimestamp:2025-10-03 18:48:07.610336077 +0000 UTC m=+0.070153337,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-422561,}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-422561 -n ha-422561: exit status 2 (297.367151ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-422561" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.55s)
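
The --format flag passed to minikube status above is a Go template rendered against the status structure, which is why the raw stdout is the single word "Stopped". A tiny illustration of that mechanism (the struct below is hypothetical, not minikube's own type):

// status_format.go: render a {{.APIServer}}-style Go template over a struct.
package main

import (
	"os"
	"text/template"
)

type status struct{ Host, Kubelet, APIServer string }

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	// "Stopped" mirrors the value printed in the report above.
	_ = tmpl.Execute(os.Stdout, status{Host: "Running", Kubelet: "Running", APIServer: "Stopped"})
}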

                                                
                                    
x
+
TestJSONOutput/start/Command (496.13s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-553665 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1003 18:56:51.838781   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:01:51.839179   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-553665 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: exit status 80 (8m16.124537689s)
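
The stdout below is minikube's --output=json stream: one CloudEvents-style JSON object per line, with io.k8s.sigs.minikube.step, .info, and .error event types. The following is a minimal sketch of a decoder for such a stream; the struct is illustrative rather than minikube's own type, though the field names are taken from the events shown below.

// events_decode.go: decode line-delimited minikube JSON events from stdin.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"` // e.g. io.k8s.sigs.minikube.step
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error events can be very long lines
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON noise interleaved in the stream
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error (exitcode=%q): %.120s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Tracking the currentstep values from the step events is presumably what the DistinctCurrentSteps and IncreasingCurrentSteps subtests assert over; note in the stream below that the run advances from step 0 to 1 to 3, then repeats steps 12 and 13 when kubeadm init is retried.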

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"54592361-8dd1-473b-b7a8-288a1517dd0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-553665] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"49593437-5db7-4f0c-870b-40140300f901","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21625"}}
	{"specversion":"1.0","id":"bde5d531-0b07-4b89-88ab-ea5f678edf9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c20090c0-0311-4c21-9db9-05117d1d1759","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig"}}
	{"specversion":"1.0","id":"33e72055-824a-4833-bd5b-ac7f14865c67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube"}}
	{"specversion":"1.0","id":"36ec1d04-8a49-47d4-b64d-976fb7dd9f4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"cfdb453b-c14f-491a-adf8-a831a09eaf09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"df1be0b6-0e4e-4f62-9f3c-4a0d7a41a458","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a3b7e44f-e903-4b8b-9f10-c22c68005d2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ee5a8ad0-f28b-4731-9f6b-22384abf8e93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-553665\" primary control-plane node in \"json-output-553665\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9b465501-dee4-4db5-8c3b-967af6bc4a5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759382731-21643 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"71003182-11fb-4cae-8a36-8a0f1714f81d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6d35b80b-fd66-404c-97d4-817ce8568b60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"11","message":"Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...","name":"Preparing Kubernetes","totalsteps":"19"}}
	{"specversion":"1.0","id":"8deec515-57a5-42cf-92ed-5bb1f3ce312a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"12","message":"Generating certificates and keys ...","name":"Generating certificates","totalsteps":"19"}}
	{"specversion":"1.0","id":"73a456cf-d317-40ab-969e-b905e99b1b30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"13","message":"Booting up control plane ...","name":"Booting control plane","totalsteps":"19"}}
	{"specversion":"1.0","id":"a0f19f5c-d8b9-4e1e-9ef2-cbd6d4fce83e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Pri
nting the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\
n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-553665 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-553665 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writi
ng \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the ku
belet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001682479s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.000338011s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000442308s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000584703s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using
your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check faile
d at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"}}
	{"specversion":"1.0","id":"3e01c609-1fb3-4a9d-9deb-3e32c35b5862","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"12","message":"Generating certificates and keys ...","name":"Generating certificates","totalsteps":"19"}}
	{"specversion":"1.0","id":"16e6721f-410c-495e-a85e-902395049977","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"13","message":"Booting up control plane ...","name":"Booting control plane","totalsteps":"19"}}
	{"specversion":"1.0","id":"a655f2a8-8464-478c-96c9-b081c8862543","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the outpu
t from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using
existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Using existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[
etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/health
z. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.488429ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.000154532s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000236936s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000391069s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v p
ause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:102
57/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"}}
	{"specversion":"1.0","id":"7725daeb-5d19-4c52-875a-d94ff5f35cff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system v
erification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/va
r/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Using existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy ku
belet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.488429ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.000154532s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000236936s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000391069s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/c
rio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager
check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher","name":"GUEST_START","url":""}}
	{"specversion":"1.0","id":"b45a37e7-4866-4b6b-af65-26c223a2455a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 start -p json-output-553665 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio": exit status 80
--- FAIL: TestJSONOutput/start/Command (496.13s)
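
For context on what the subtests below are checking: with --output=json, minikube prints one CloudEvent per line, and the step events carry string-valued "currentstep"/"totalsteps" fields, as visible in the dump above. The following is a minimal sketch of a decoder for such a stream — my illustration, not code from the test suite; the type name cloudEvent is mine, and the field names are taken from the events printed in this report:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the fields visible in the events above; only the
// ones a step check needs are declared.
type cloudEvent struct {
	Type string `json:"type"` // e.g. io.k8s.sigs.minikube.step
	Data struct {
		CurrentStep string `json:"currentstep"`
		TotalSteps  string `json:"totalsteps"`
		Name        string `json:"name"`
	} `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // the error events above are very long lines
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.step" {
			fmt.Printf("step %s/%s: %s\n", ev.Data.CurrentStep, ev.Data.TotalSteps, ev.Data.Name)
		}
	}
}

Fed the stream above, this prints steps 0, 1, 3, 5, 8, 11, 12, 13, then 12 and 13 again after the kubeadm init retry — the repetition that both parallel subtests below reject.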

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 12 has already been assigned to another step:
Generating certificates and keys ...
Cannot use for:
Generating certificates and keys ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 54592361-8dd1-473b-b7a8-288a1517dd0d
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-553665] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 49593437-5db7-4f0c-870b-40140300f901
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=21625"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: bde5d531-0b07-4b89-88ab-ea5f678edf9f
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: c20090c0-0311-4c21-9db9-05117d1d1759
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 33e72055-824a-4833-bd5b-ac7f14865c67
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 36ec1d04-8a49-47d4-b64d-976fb7dd9f4b
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-linux-amd64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: cfdb453b-c14f-491a-adf8-a831a09eaf09
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: df1be0b6-0e4e-4f62-9f3c-4a0d7a41a458
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: a3b7e44f-e903-4b8b-9f10-c22c68005d2c
datacontenttype: application/json
Data,
{
"message": "Using Docker driver with root privileges"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ee5a8ad0-f28b-4731-9f6b-22384abf8e93
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-553665\" primary control-plane node in \"json-output-553665\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 9b465501-dee4-4db5-8c3b-967af6bc4a5e
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image v0.0.48-1759382731-21643 ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 71003182-11fb-4cae-8a36-8a0f1714f81d
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=3072MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 6d35b80b-fd66-404c-97d4-817ce8568b60
datacontenttype: application/json
Data,
{
"currentstep": "11",
"message": "Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...",
"name": "Preparing Kubernetes",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 8deec515-57a5-42cf-92ed-5bb1f3ce312a
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 73a456cf-d317-40ab-969e-b905e99b1b30
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: a0f19f5c-d8b9-4e1e-9ef2-cbd6d4fce83e
datacontenttype: application/json
Data,
{
"message": "initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGR
OUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[c
erts] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-553665 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-553665 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kub
elet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001682479s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.000338011s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000442308s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000584703s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/cr
io.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:
10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 3e01c609-1fb3-4a9d-9deb-3e32c35b5862
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 16e6721f-410c-495e-a85e-902395049977
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: a655f2a8-8464-478c-96c9-b081c8862543
datacontenttype: application/json
Data,
{
"message": "Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[
0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] U
sing existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating stati
c Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.488429ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[
control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.000154532s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000236936s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000391069s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WA
RNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 7725daeb-5d19-4c52-875a-d94ff5f35cff
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m
: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Usi
ng existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static
Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.488429ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[co
ntrol-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.000154532s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000236936s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000391069s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARN
ING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher",
"name": "GUEST_START",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: b45a37e7-4866-4b6b-af65-26c223a2455a
datacontenttype: application/json
Data,
{
"message": "╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰────────────────────────────────────────
───────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
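
The property this subtest asserts is that no currentstep value may be assigned to more than one emitted step. A minimal sketch of that kind of check, under assumed names (stepEvent and checkDistinct are mine, not the helpers in json_output_test.go):

package main

import "fmt"

// stepEvent carries the two fields the check needs, as decoded from
// io.k8s.sigs.minikube.step events.
type stepEvent struct {
	CurrentStep string
	Name        string
}

// checkDistinct fails the first time a currentstep value repeats.
func checkDistinct(events []stepEvent) error {
	seen := make(map[string]string)
	for _, ev := range events {
		if prev, ok := seen[ev.CurrentStep]; ok {
			return fmt.Errorf("step %s has already been assigned to %q; cannot use for %q",
				ev.CurrentStep, prev, ev.Name)
		}
		seen[ev.CurrentStep] = ev.Name
	}
	return nil
}

func main() {
	// The retry after the failed kubeadm init re-emits steps 12 and 13,
	// exactly as in the event list above, so the check fails on "12".
	events := []stepEvent{
		{"11", "Preparing Kubernetes"},
		{"12", "Generating certificates"},
		{"13", "Booting control plane"},
		{"12", "Generating certificates"},
	}
	fmt.Println(checkDistinct(events))
}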

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:144: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 54592361-8dd1-473b-b7a8-288a1517dd0d
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-553665] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 49593437-5db7-4f0c-870b-40140300f901
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=21625"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: bde5d531-0b07-4b89-88ab-ea5f678edf9f
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: c20090c0-0311-4c21-9db9-05117d1d1759
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 33e72055-824a-4833-bd5b-ac7f14865c67
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 36ec1d04-8a49-47d4-b64d-976fb7dd9f4b
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-linux-amd64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: cfdb453b-c14f-491a-adf8-a831a09eaf09
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: df1be0b6-0e4e-4f62-9f3c-4a0d7a41a458
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: a3b7e44f-e903-4b8b-9f10-c22c68005d2c
datacontenttype: application/json
Data,
{
"message": "Using Docker driver with root privileges"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ee5a8ad0-f28b-4731-9f6b-22384abf8e93
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-553665\" primary control-plane node in \"json-output-553665\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 9b465501-dee4-4db5-8c3b-967af6bc4a5e
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image v0.0.48-1759382731-21643 ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 71003182-11fb-4cae-8a36-8a0f1714f81d
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=3072MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 6d35b80b-fd66-404c-97d4-817ce8568b60
datacontenttype: application/json
Data,
{
"currentstep": "11",
"message": "Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...",
"name": "Preparing Kubernetes",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 8deec515-57a5-42cf-92ed-5bb1f3ce312a
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 73a456cf-d317-40ab-969e-b905e99b1b30
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: a0f19f5c-d8b9-4e1e-9ef2-cbd6d4fce83e
datacontenttype: application/json
Data,
{
"message": "initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGR
OUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[c
erts] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-553665 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-553665 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kub
elet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001682479s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.000338011s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000442308s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000584703s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/cr
io.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:
10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 3e01c609-1fb3-4a9d-9deb-3e32c35b5862
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 16e6721f-410c-495e-a85e-902395049977
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: a655f2a8-8464-478c-96c9-b081c8862543
datacontenttype: application/json
Data,
{
"message": "Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[
0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] U
sing existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating stati
c Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.488429ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[
control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.000154532s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000236936s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000391069s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WA
RNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 7725daeb-5d19-4c52-875a-d94ff5f35cff
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m
: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Using existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.488429ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.000154532s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000236936s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000391069s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher",
"name": "GUEST_START",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: b45a37e7-4866-4b6b-af65-26c223a2455a
datacontenttype: application/json
Data,
{
"message": "╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰────────────────────────────────────────
───────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
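The block above is minikube's Cloud Events stream as pretty-printed by the test harness; with --output=json, minikube emits one self-contained JSON event per line with the same specversion, type, and data attributes shown here. As a minimal sketch of inspecting such a stream outside the harness (assuming jq is installed; the profile name json-check is illustrative):

	# keep only error events and print their human-readable message
	out/minikube-linux-amd64 start -p json-check --output=json --driver=docker --container-runtime=crio \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'

Because each line is a standalone JSON object, per-line filtering on .type is enough to separate steps, warnings, and errors.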

                                                
                                    
x
+
TestMinikubeProfile (500.66s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-128130 --driver=docker  --container-runtime=crio
E1003 19:04:54.916131   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:06:51.830208   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:11:51.838537   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
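The cert_rotation errors above are leftovers from the earlier functional-889240 profile: the shared kubeconfig still references that profile's client certificate after the profile itself was deleted. A cleanup sketch, assuming kubectl is on PATH and that minikube named the kubeconfig entries after the profile (its default behavior):

	kubectl config delete-context functional-889240
	kubectl config delete-cluster functional-889240
	kubectl config unset users.functional-889240

These errors are noise for TestMinikubeProfile itself, but removing the stale entries stops client-go from retrying the missing certificate.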
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p first-128130 --driver=docker  --container-runtime=crio: exit status 80 (8m17.261259168s)

                                                
                                                
-- stdout --
	* [first-128130] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "first-128130" primary control-plane node in "first-128130" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [first-128130 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [first-128130 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.88782ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000215615s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000301356s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000719315s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.206893ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000183802s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000227224s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000450328s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.206893ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000183802s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000227224s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000450328s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

                                                
                                                
** /stderr **
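Following the crictl hint kubeadm prints in the failure output above, the crashed control-plane containers can be examined from the host through the node container. A diagnostic sketch, assuming the first-128130 node container is still running (CONTAINERID is a placeholder copied from the ps output):

	out/minikube-linux-amd64 ssh -p first-128130 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	out/minikube-linux-amd64 ssh -p first-128130 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	out/minikube-linux-amd64 ssh -p first-128130 -- sudo journalctl -u kubelet --no-pager -n 50

The connection-refused probes on ports 8443, 10257, and 10259 mean the static pods never came up, so the crictl output and the kubelet journal are usually where the root cause surfaces.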
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-linux-amd64 start -p first-128130 --driver=docker  --container-runtime=crio": exit status 80
panic.go:636: *** TestMinikubeProfile FAILED at 2025-10-03 19:13:05.450214995 +0000 UTC m=+5435.407395501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMinikubeProfile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect second-131490
helpers_test.go:239: (dbg) Non-zero exit: docker inspect second-131490: exit status 1 (27.926273ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: second-131490

                                                
                                                
** /stderr **
helpers_test.go:241: failed to get docker inspect: exit status 1
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p second-131490 -n second-131490
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p second-131490 -n second-131490: exit status 85 (63.685353ms)

                                                
                                                
-- stdout --
	* Profile "second-131490" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-131490"

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 85 (may be ok)
helpers_test.go:249: "second-131490" host is not running, skipping log retrieval (state="* Profile \"second-131490\" not found. Run \"minikube profile list\" to view all profiles.")
helpers_test.go:175: Cleaning up "second-131490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-131490
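The cleanup above relies on minikube's own profile bookkeeping. As a general sketch of clearing leftover profiles after a failed run (delete --all removes every profile tracked under MINIKUBE_HOME):

	out/minikube-linux-amd64 profile list
	out/minikube-linux-amd64 delete -p first-128130
	out/minikube-linux-amd64 delete --all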
panic.go:636: *** TestMinikubeProfile FAILED at 2025-10-03 19:13:05.688259172 +0000 UTC m=+5435.645439686
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMinikubeProfile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect first-128130
helpers_test.go:243: (dbg) docker inspect first-128130:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "61954a50395ae7e7be53f387722a1476a77d52f7c75aef8c7420a8b6b3e3cdab",
	        "Created": "2025-10-03T19:04:53.24843874Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 116809,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T19:04:53.279741972Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/61954a50395ae7e7be53f387722a1476a77d52f7c75aef8c7420a8b6b3e3cdab/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/61954a50395ae7e7be53f387722a1476a77d52f7c75aef8c7420a8b6b3e3cdab/hostname",
	        "HostsPath": "/var/lib/docker/containers/61954a50395ae7e7be53f387722a1476a77d52f7c75aef8c7420a8b6b3e3cdab/hosts",
	        "LogPath": "/var/lib/docker/containers/61954a50395ae7e7be53f387722a1476a77d52f7c75aef8c7420a8b6b3e3cdab/61954a50395ae7e7be53f387722a1476a77d52f7c75aef8c7420a8b6b3e3cdab-json.log",
	        "Name": "/first-128130",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "first-128130:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "first-128130",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 8388608000,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 16777216000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "61954a50395ae7e7be53f387722a1476a77d52f7c75aef8c7420a8b6b3e3cdab",
	                "LowerDir": "/var/lib/docker/overlay2/7640381349b9bc95e78ea32e77cf9ef8882de5fdc6b885992e9c9dd7f05dc04a-init/diff:/var/lib/docker/overlay2/6a517a7375440eba803d7b83fe1e0821915758396dd4d8556ab64fff322a60c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7640381349b9bc95e78ea32e77cf9ef8882de5fdc6b885992e9c9dd7f05dc04a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7640381349b9bc95e78ea32e77cf9ef8882de5fdc6b885992e9c9dd7f05dc04a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7640381349b9bc95e78ea32e77cf9ef8882de5fdc6b885992e9c9dd7f05dc04a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "first-128130",
	                "Source": "/var/lib/docker/volumes/first-128130/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "first-128130",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "first-128130",
	                "name.minikube.sigs.k8s.io": "first-128130",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ca8680292156d2c777163c153f2ebc9670c468f0009e2aa30202fc80485e8b21",
	            "SandboxKey": "/var/run/docker/netns/ca8680292156",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "first-128130": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:a0:1d:4e:58:e5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1d711adb797271e16f74cecbaeeefc6da414d82ac5b54a4a7388f1d43358e1c3",
	                    "EndpointID": "c17461f9e744bfdb474cd809085fb6988d907cce47a887a7aa86e55b9c2b5eaf",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "first-128130",
	                        "61954a50395a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
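The full inspect dump above can be narrowed to the fields the post-mortem actually checks by using docker's Go-template formatting; the field paths below match the JSON shown above:

	docker inspect first-128130 --format '{{.State.Status}}'
	docker inspect first-128130 --format '{{json .NetworkSettings.Ports}}'
	docker inspect first-128130 --format '{{(index .NetworkSettings.Networks "first-128130").IPAddress}}'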
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p first-128130 -n first-128130
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p first-128130 -n first-128130: exit status 6 (296.020473ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 19:13:05.988648  121318 status.go:458] kubeconfig endpoint: get endpoint: "first-128130" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
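Exit status 6 encodes the condition the stderr line describes: the node container is running, but the first-128130 endpoint is missing from the shared kubeconfig. A repair sketch following minikube's own suggestion (update-context rewrites the kubeconfig endpoint for the named profile):

	out/minikube-linux-amd64 -p first-128130 update-context
	kubectl config current-context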
helpers_test.go:252: <<< TestMinikubeProfile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMinikubeProfile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p first-128130 logs -n 25
helpers_test.go:260: TestMinikubeProfile logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬──────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │         PROFILE          │   USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼──────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ ha-422561 node delete m03 --alsologtostderr -v 5                                                                        │ ha-422561                │ jenkins  │ v1.37.0 │ 03 Oct 25 18:47 UTC │                     │
	│ stop    │ ha-422561 stop --alsologtostderr -v 5                                                                                   │ ha-422561                │ jenkins  │ v1.37.0 │ 03 Oct 25 18:47 UTC │ 03 Oct 25 18:48 UTC │
	│ start   │ ha-422561 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                            │ ha-422561                │ jenkins  │ v1.37.0 │ 03 Oct 25 18:48 UTC │                     │
	│ node    │ ha-422561 node add --control-plane --alsologtostderr -v 5                                                               │ ha-422561                │ jenkins  │ v1.37.0 │ 03 Oct 25 18:54 UTC │                     │
	│ delete  │ -p ha-422561                                                                                                            │ ha-422561                │ jenkins  │ v1.37.0 │ 03 Oct 25 18:54 UTC │ 03 Oct 25 18:54 UTC │
	│ start   │ -p json-output-553665 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio │ json-output-553665       │ testUser │ v1.37.0 │ 03 Oct 25 18:54 UTC │                     │
	│ pause   │ -p json-output-553665 --output=json --user=testUser                                                                     │ json-output-553665       │ testUser │ v1.37.0 │ 03 Oct 25 19:02 UTC │ 03 Oct 25 19:02 UTC │
	│ unpause │ -p json-output-553665 --output=json --user=testUser                                                                     │ json-output-553665       │ testUser │ v1.37.0 │ 03 Oct 25 19:02 UTC │ 03 Oct 25 19:02 UTC │
	│ stop    │ -p json-output-553665 --output=json --user=testUser                                                                     │ json-output-553665       │ testUser │ v1.37.0 │ 03 Oct 25 19:02 UTC │ 03 Oct 25 19:02 UTC │
	│ delete  │ -p json-output-553665                                                                                                   │ json-output-553665       │ jenkins  │ v1.37.0 │ 03 Oct 25 19:02 UTC │ 03 Oct 25 19:02 UTC │
	│ start   │ -p json-output-error-023222 --memory=3072 --output=json --wait=true --driver=fail                                       │ json-output-error-023222 │ jenkins  │ v1.37.0 │ 03 Oct 25 19:02 UTC │                     │
	│ delete  │ -p json-output-error-023222                                                                                             │ json-output-error-023222 │ jenkins  │ v1.37.0 │ 03 Oct 25 19:02 UTC │ 03 Oct 25 19:02 UTC │
	│ start   │ -p docker-network-781121 --network=                                                                                     │ docker-network-781121    │ jenkins  │ v1.37.0 │ 03 Oct 25 19:02 UTC │ 03 Oct 25 19:03 UTC │
	│ delete  │ -p docker-network-781121                                                                                                │ docker-network-781121    │ jenkins  │ v1.37.0 │ 03 Oct 25 19:03 UTC │ 03 Oct 25 19:03 UTC │
	│ start   │ -p docker-network-171010 --network=bridge                                                                               │ docker-network-171010    │ jenkins  │ v1.37.0 │ 03 Oct 25 19:03 UTC │ 03 Oct 25 19:03 UTC │
	│ delete  │ -p docker-network-171010                                                                                                │ docker-network-171010    │ jenkins  │ v1.37.0 │ 03 Oct 25 19:03 UTC │ 03 Oct 25 19:03 UTC │
	│ start   │ -p existing-network-618191 --network=existing-network                                                                   │ existing-network-618191  │ jenkins  │ v1.37.0 │ 03 Oct 25 19:03 UTC │ 03 Oct 25 19:03 UTC │
	│ delete  │ -p existing-network-618191                                                                                              │ existing-network-618191  │ jenkins  │ v1.37.0 │ 03 Oct 25 19:03 UTC │ 03 Oct 25 19:03 UTC │
	│ start   │ -p custom-subnet-077545 --subnet=192.168.60.0/24                                                                        │ custom-subnet-077545     │ jenkins  │ v1.37.0 │ 03 Oct 25 19:03 UTC │ 03 Oct 25 19:04 UTC │
	│ delete  │ -p custom-subnet-077545                                                                                                 │ custom-subnet-077545     │ jenkins  │ v1.37.0 │ 03 Oct 25 19:04 UTC │ 03 Oct 25 19:04 UTC │
	│ start   │ -p static-ip-730718 --static-ip=192.168.200.200                                                                         │ static-ip-730718         │ jenkins  │ v1.37.0 │ 03 Oct 25 19:04 UTC │ 03 Oct 25 19:04 UTC │
	│ ip      │ static-ip-730718 ip                                                                                                     │ static-ip-730718         │ jenkins  │ v1.37.0 │ 03 Oct 25 19:04 UTC │ 03 Oct 25 19:04 UTC │
	│ delete  │ -p static-ip-730718                                                                                                     │ static-ip-730718         │ jenkins  │ v1.37.0 │ 03 Oct 25 19:04 UTC │ 03 Oct 25 19:04 UTC │
	│ start   │ -p first-128130 --driver=docker  --container-runtime=crio                                                               │ first-128130             │ jenkins  │ v1.37.0 │ 03 Oct 25 19:04 UTC │                     │
	│ delete  │ -p second-131490                                                                                                        │ second-131490            │ jenkins  │ v1.37.0 │ 03 Oct 25 19:13 UTC │ 03 Oct 25 19:13 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴──────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 19:04:48
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:04:48.240272  116240 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:04:48.240553  116240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:04:48.240557  116240 out.go:374] Setting ErrFile to fd 2...
	I1003 19:04:48.240560  116240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:04:48.240796  116240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 19:04:48.241273  116240 out.go:368] Setting JSON to false
	I1003 19:04:48.242216  116240 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6439,"bootTime":1759511849,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 19:04:48.242275  116240 start.go:140] virtualization: kvm guest
	I1003 19:04:48.244574  116240 out.go:179] * [first-128130] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 19:04:48.245832  116240 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 19:04:48.245838  116240 notify.go:220] Checking for updates...
	I1003 19:04:48.248118  116240 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:04:48.249245  116240 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 19:04:48.250295  116240 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 19:04:48.252374  116240 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 19:04:48.256152  116240 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 19:04:48.257646  116240 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 19:04:48.280239  116240 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 19:04:48.280343  116240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:04:48.335410  116240 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 19:04:48.324993211 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 19:04:48.335512  116240 docker.go:318] overlay module found
	I1003 19:04:48.338307  116240 out.go:179] * Using the docker driver based on user configuration
	I1003 19:04:48.339517  116240 start.go:304] selected driver: docker
	I1003 19:04:48.339526  116240 start.go:924] validating driver "docker" against <nil>
	I1003 19:04:48.339536  116240 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:04:48.339628  116240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:04:48.393955  116240 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-03 19:04:48.384460023 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 19:04:48.394127  116240 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 19:04:48.394632  116240 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1003 19:04:48.394782  116240 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 19:04:48.397015  116240 out.go:179] * Using Docker driver with root privileges
	I1003 19:04:48.398539  116240 cni.go:84] Creating CNI manager for ""
	I1003 19:04:48.398603  116240 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:04:48.398612  116240 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 19:04:48.398694  116240 start.go:348] cluster config:
	{Name:first-128130 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-128130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:04:48.400047  116240 out.go:179] * Starting "first-128130" primary control-plane node in "first-128130" cluster
	I1003 19:04:48.401143  116240 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 19:04:48.402324  116240 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 19:04:48.403333  116240 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:04:48.403363  116240 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 19:04:48.403369  116240 cache.go:58] Caching tarball of preloaded images
	I1003 19:04:48.403433  116240 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 19:04:48.403455  116240 preload.go:233] Found /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 19:04:48.403462  116240 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 19:04:48.403768  116240 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/config.json ...
	I1003 19:04:48.403783  116240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/config.json: {Name:mk5ac010734a18ffc6c4ee28a920d9f791f35d04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:04:48.423887  116240 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 19:04:48.423905  116240 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 19:04:48.423920  116240 cache.go:232] Successfully downloaded all kic artifacts
	I1003 19:04:48.423941  116240 start.go:360] acquireMachinesLock for first-128130: {Name:mk0bdbf1da5479b96f0ba182e79003cee786d89c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:04:48.424055  116240 start.go:364] duration metric: took 100.632µs to acquireMachinesLock for "first-128130"
	I1003 19:04:48.424074  116240 start.go:93] Provisioning new machine with config: &{Name:first-128130 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-128130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:04:48.424135  116240 start.go:125] createHost starting for "" (driver="docker")
	I1003 19:04:48.426003  116240 out.go:252] * Creating docker container (CPUs=2, Memory=8000MB) ...
	I1003 19:04:48.426227  116240 start.go:159] libmachine.API.Create for "first-128130" (driver="docker")
	I1003 19:04:48.426248  116240 client.go:168] LocalClient.Create starting
	I1003 19:04:48.426316  116240 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem
	I1003 19:04:48.426344  116240 main.go:141] libmachine: Decoding PEM data...
	I1003 19:04:48.426355  116240 main.go:141] libmachine: Parsing certificate...
	I1003 19:04:48.426406  116240 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem
	I1003 19:04:48.426422  116240 main.go:141] libmachine: Decoding PEM data...
	I1003 19:04:48.426428  116240 main.go:141] libmachine: Parsing certificate...
	I1003 19:04:48.426735  116240 cli_runner.go:164] Run: docker network inspect first-128130 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 19:04:48.443122  116240 cli_runner.go:211] docker network inspect first-128130 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 19:04:48.443186  116240 network_create.go:284] running [docker network inspect first-128130] to gather additional debugging logs...
	I1003 19:04:48.443199  116240 cli_runner.go:164] Run: docker network inspect first-128130
	W1003 19:04:48.459832  116240 cli_runner.go:211] docker network inspect first-128130 returned with exit code 1
	I1003 19:04:48.459851  116240 network_create.go:287] error running [docker network inspect first-128130]: docker network inspect first-128130: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network first-128130 not found
	I1003 19:04:48.459862  116240 network_create.go:289] output of [docker network inspect first-128130]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network first-128130 not found
	
	** /stderr **
	I1003 19:04:48.459991  116240 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:04:48.477379  116240 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6e12de11e074 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ae:d6:72:f0:cd:be} reservation:<nil>}
	I1003 19:04:48.477761  116240 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d98480}
	I1003 19:04:48.477780  116240 network_create.go:124] attempt to create docker network first-128130 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1003 19:04:48.477826  116240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=first-128130 first-128130
	I1003 19:04:48.534121  116240 network_create.go:108] docker network first-128130 192.168.58.0/24 created
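The subnet scan above skipped 192.168.49.0/24 because an existing bridge already held it, then settled on 192.168.58.0/24. A minimal shell sketch for listing which subnets local Docker networks reserve (a hand-written illustration, not minikube's own code):

	# Print each Docker network with the subnet(s) its IPAM config reserves,
	# mirroring the free-subnet probe performed above.
	docker network ls --format '{{.Name}}' | while read -r net; do
	    docker network inspect "$net" \
	        --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
	done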
	I1003 19:04:48.534144  116240 kic.go:121] calculated static IP "192.168.58.2" for the "first-128130" container
	I1003 19:04:48.534212  116240 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 19:04:48.550704  116240 cli_runner.go:164] Run: docker volume create first-128130 --label name.minikube.sigs.k8s.io=first-128130 --label created_by.minikube.sigs.k8s.io=true
	I1003 19:04:48.568519  116240 oci.go:103] Successfully created a docker volume first-128130
	I1003 19:04:48.568579  116240 cli_runner.go:164] Run: docker run --rm --name first-128130-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=first-128130 --entrypoint /usr/bin/test -v first-128130:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 19:04:48.933037  116240 oci.go:107] Successfully prepared a docker volume first-128130
	I1003 19:04:48.933081  116240 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:04:48.933100  116240 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 19:04:48.933162  116240 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v first-128130:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 19:04:53.183122  116240 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v first-128130:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.249924786s)
	I1003 19:04:53.183150  116240 kic.go:203] duration metric: took 4.2500452s to extract preloaded images to volume ...
	W1003 19:04:53.183259  116240 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1003 19:04:53.183293  116240 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1003 19:04:53.183333  116240 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 19:04:53.232909  116240 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname first-128130 --name first-128130 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=first-128130 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=first-128130 --network first-128130 --ip 192.168.58.2 --volume first-128130:/var --security-opt apparmor=unconfined --memory=8000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1003 19:04:53.490059  116240 cli_runner.go:164] Run: docker container inspect first-128130 --format={{.State.Running}}
	I1003 19:04:53.507236  116240 cli_runner.go:164] Run: docker container inspect first-128130 --format={{.State.Status}}
	I1003 19:04:53.525149  116240 cli_runner.go:164] Run: docker exec first-128130 stat /var/lib/dpkg/alternatives/iptables
	I1003 19:04:53.568114  116240 oci.go:144] the created container "first-128130" has a running status.
	I1003 19:04:53.568145  116240 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/first-128130/id_rsa...
	I1003 19:04:53.964457  116240 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-8669/.minikube/machines/first-128130/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 19:04:53.990060  116240 cli_runner.go:164] Run: docker container inspect first-128130 --format={{.State.Status}}
	I1003 19:04:54.007861  116240 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 19:04:54.007877  116240 kic_runner.go:114] Args: [docker exec --privileged first-128130 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 19:04:54.052473  116240 cli_runner.go:164] Run: docker container inspect first-128130 --format={{.State.Status}}
	I1003 19:04:54.070769  116240 machine.go:93] provisionDockerMachine start ...
	I1003 19:04:54.070837  116240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-128130
	I1003 19:04:54.088861  116240 main.go:141] libmachine: Using SSH client type: native
	I1003 19:04:54.089136  116240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1003 19:04:54.089156  116240 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 19:04:54.231873  116240 main.go:141] libmachine: SSH cmd err, output: <nil>: first-128130
	
	I1003 19:04:54.231890  116240 ubuntu.go:182] provisioning hostname "first-128130"
	I1003 19:04:54.231948  116240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-128130
	I1003 19:04:54.250075  116240 main.go:141] libmachine: Using SSH client type: native
	I1003 19:04:54.250272  116240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1003 19:04:54.250278  116240 main.go:141] libmachine: About to run SSH command:
	sudo hostname first-128130 && echo "first-128130" | sudo tee /etc/hostname
	I1003 19:04:54.402227  116240 main.go:141] libmachine: SSH cmd err, output: <nil>: first-128130
	
	I1003 19:04:54.402301  116240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-128130
	I1003 19:04:54.419339  116240 main.go:141] libmachine: Using SSH client type: native
	I1003 19:04:54.419535  116240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1003 19:04:54.419547  116240 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfirst-128130' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 first-128130/g' /etc/hosts;
				else 
					echo '127.0.1.1 first-128130' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:04:54.561406  116240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 19:04:54.561427  116240 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8669/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8669/.minikube}
	I1003 19:04:54.561447  116240 ubuntu.go:190] setting up certificates
	I1003 19:04:54.561458  116240 provision.go:84] configureAuth start
	I1003 19:04:54.561514  116240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-128130
	I1003 19:04:54.578258  116240 provision.go:143] copyHostCerts
	I1003 19:04:54.578311  116240 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem, removing ...
	I1003 19:04:54.578316  116240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem
	I1003 19:04:54.578380  116240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/key.pem (1675 bytes)
	I1003 19:04:54.578466  116240 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem, removing ...
	I1003 19:04:54.578469  116240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem
	I1003 19:04:54.578492  116240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/ca.pem (1082 bytes)
	I1003 19:04:54.578556  116240 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem, removing ...
	I1003 19:04:54.578558  116240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem
	I1003 19:04:54.578585  116240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8669/.minikube/cert.pem (1123 bytes)
	I1003 19:04:54.578641  116240 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem org=jenkins.first-128130 san=[127.0.0.1 192.168.58.2 first-128130 localhost minikube]
	I1003 19:04:54.739175  116240 provision.go:177] copyRemoteCerts
	I1003 19:04:54.739226  116240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 19:04:54.739259  116240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-128130
	I1003 19:04:54.757146  116240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/first-128130/id_rsa Username:docker}
	I1003 19:04:54.857849  116240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 19:04:54.876509  116240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1003 19:04:54.892917  116240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 19:04:54.909405  116240 provision.go:87] duration metric: took 347.935633ms to configureAuth
	I1003 19:04:54.909424  116240 ubuntu.go:206] setting minikube options for container-runtime
	I1003 19:04:54.909573  116240 config.go:182] Loaded profile config "first-128130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:04:54.909669  116240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-128130
	I1003 19:04:54.928502  116240 main.go:141] libmachine: Using SSH client type: native
	I1003 19:04:54.928704  116240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1003 19:04:54.928713  116240 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 19:04:55.178410  116240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 19:04:55.178424  116240 machine.go:96] duration metric: took 1.107644084s to provisionDockerMachine
	I1003 19:04:55.178433  116240 client.go:171] duration metric: took 6.75218101s to LocalClient.Create
	I1003 19:04:55.178449  116240 start.go:167] duration metric: took 6.752221319s to libmachine.API.Create "first-128130"
	I1003 19:04:55.178456  116240 start.go:293] postStartSetup for "first-128130" (driver="docker")
	I1003 19:04:55.178465  116240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 19:04:55.178540  116240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 19:04:55.178586  116240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-128130
	I1003 19:04:55.196338  116240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/first-128130/id_rsa Username:docker}
	I1003 19:04:55.298659  116240 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 19:04:55.301851  116240 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 19:04:55.301867  116240 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 19:04:55.301874  116240 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/addons for local assets ...
	I1003 19:04:55.301917  116240 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8669/.minikube/files for local assets ...
	I1003 19:04:55.301990  116240 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem -> 122122.pem in /etc/ssl/certs
	I1003 19:04:55.302070  116240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 19:04:55.309260  116240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /etc/ssl/certs/122122.pem (1708 bytes)
	I1003 19:04:55.328036  116240 start.go:296] duration metric: took 149.568587ms for postStartSetup
	I1003 19:04:55.328332  116240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-128130
	I1003 19:04:55.344807  116240 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/config.json ...
	I1003 19:04:55.345114  116240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:04:55.345172  116240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-128130
	I1003 19:04:55.361706  116240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/first-128130/id_rsa Username:docker}
	I1003 19:04:55.459560  116240 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:04:55.464110  116240 start.go:128] duration metric: took 7.039962676s to createHost
	I1003 19:04:55.464127  116240 start.go:83] releasing machines lock for "first-128130", held for 7.04006396s
	I1003 19:04:55.464198  116240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-128130
	I1003 19:04:55.482126  116240 ssh_runner.go:195] Run: cat /version.json
	I1003 19:04:55.482148  116240 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 19:04:55.482172  116240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-128130
	I1003 19:04:55.482207  116240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-128130
	I1003 19:04:55.500646  116240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/first-128130/id_rsa Username:docker}
	I1003 19:04:55.501281  116240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/first-128130/id_rsa Username:docker}
	I1003 19:04:55.649694  116240 ssh_runner.go:195] Run: systemctl --version
	I1003 19:04:55.656160  116240 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 19:04:55.689251  116240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 19:04:55.693790  116240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 19:04:55.693843  116240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 19:04:55.718481  116240 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 19:04:55.718493  116240 start.go:495] detecting cgroup driver to use...
	I1003 19:04:55.718518  116240 detect.go:190] detected "systemd" cgroup driver on host os
	I1003 19:04:55.718560  116240 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 19:04:55.732828  116240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 19:04:55.744359  116240 docker.go:218] disabling cri-docker service (if available) ...
	I1003 19:04:55.744397  116240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 19:04:55.760122  116240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 19:04:55.776431  116240 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 19:04:55.853908  116240 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 19:04:55.941035  116240 docker.go:234] disabling docker service ...
	I1003 19:04:55.941091  116240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 19:04:55.959142  116240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 19:04:55.971888  116240 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 19:04:56.053142  116240 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 19:04:56.132276  116240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 19:04:56.144324  116240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 19:04:56.157769  116240 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 19:04:56.157809  116240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:04:56.167557  116240 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 19:04:56.167612  116240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:04:56.176080  116240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:04:56.184329  116240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:04:56.192736  116240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 19:04:56.200513  116240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:04:56.208937  116240 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:04:56.222046  116240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:04:56.230964  116240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 19:04:56.238288  116240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 19:04:56.245808  116240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:04:56.324384  116240 ssh_runner.go:195] Run: sudo systemctl restart crio
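The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted. A quick verification sketch (assumed commands; the expected values are reconstructed from the edits above, not captured from the machine):

	# Confirm the drop-in now carries the pause image, cgroup driver,
	# conmon cgroup, and unprivileged-port sysctl set by the edits above.
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# Expected, given the edits:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#     "net.ipv4.ip_unprivileged_port_start=0",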
	I1003 19:04:56.429908  116240 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 19:04:56.429971  116240 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 19:04:56.433915  116240 start.go:563] Will wait 60s for crictl version
	I1003 19:04:56.433966  116240 ssh_runner.go:195] Run: which crictl
	I1003 19:04:56.437505  116240 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 19:04:56.460717  116240 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 19:04:56.460784  116240 ssh_runner.go:195] Run: crio --version
	I1003 19:04:56.487523  116240 ssh_runner.go:195] Run: crio --version
	I1003 19:04:56.515991  116240 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 19:04:56.517250  116240 cli_runner.go:164] Run: docker network inspect first-128130 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:04:56.535652  116240 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1003 19:04:56.539876  116240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:04:56.550372  116240 kubeadm.go:883] updating cluster {Name:first-128130 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-128130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 19:04:56.550471  116240 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:04:56.550506  116240 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:04:56.581304  116240 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:04:56.581314  116240 crio.go:433] Images already preloaded, skipping extraction
	I1003 19:04:56.581356  116240 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:04:56.606647  116240 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:04:56.606661  116240 cache_images.go:85] Images are preloaded, skipping loading
	I1003 19:04:56.606669  116240 kubeadm.go:934] updating node { 192.168.58.2 8443 v1.34.1 crio true true} ...
	I1003 19:04:56.606805  116240 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=first-128130 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:first-128130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 19:04:56.606882  116240 ssh_runner.go:195] Run: crio config
	I1003 19:04:56.652170  116240 cni.go:84] Creating CNI manager for ""
	I1003 19:04:56.652181  116240 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:04:56.652194  116240 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 19:04:56.652217  116240 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:first-128130 NodeName:first-128130 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 19:04:56.652323  116240 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "first-128130"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.58.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 19:04:56.652385  116240 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 19:04:56.660247  116240 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 19:04:56.660317  116240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 19:04:56.667968  116240 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1003 19:04:56.680251  116240 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 19:04:56.694922  116240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
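With the rendered config staged at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked before init runs. A sketch using the kubeadm binary staged above; it assumes the config validate subcommand and the --dry-run flag available in recent kubeadm releases:

	# Validate the generated config against the kubeadm API schema,
	# then rehearse init without modifying the node.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run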
	I1003 19:04:56.707839  116240 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1003 19:04:56.711448  116240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:04:56.721221  116240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:04:56.798924  116240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:04:56.824376  116240 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130 for IP: 192.168.58.2
	I1003 19:04:56.824386  116240 certs.go:195] generating shared ca certs ...
	I1003 19:04:56.824402  116240 certs.go:227] acquiring lock for ca certs: {Name:mk92d1e8e469cb44d9924ff8abf5ecf0a8ce4e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:04:56.824533  116240 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key
	I1003 19:04:56.824563  116240 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key
	I1003 19:04:56.824569  116240 certs.go:257] generating profile certs ...
	I1003 19:04:56.824618  116240 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/client.key
	I1003 19:04:56.824626  116240 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/client.crt with IP's: []
	I1003 19:04:57.270258  116240 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/client.crt ...
	I1003 19:04:57.270276  116240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/client.crt: {Name:mk52dd4ec47965620bd1fc3fc985687d48243d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:04:57.270467  116240 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/client.key ...
	I1003 19:04:57.270473  116240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/client.key: {Name:mk2aa67dd6d61ad6542da486ec7c3729d5ea0793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:04:57.270549  116240 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/apiserver.key.875f7a03
	I1003 19:04:57.270559  116240 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/apiserver.crt.875f7a03 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.58.2]
	I1003 19:04:57.430025  116240 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/apiserver.crt.875f7a03 ...
	I1003 19:04:57.430040  116240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/apiserver.crt.875f7a03: {Name:mkbc86a2fbb384d2b31e4628b4abbf0530adb781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:04:57.430203  116240 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/apiserver.key.875f7a03 ...
	I1003 19:04:57.430211  116240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/apiserver.key.875f7a03: {Name:mkc4d957d13dad722267909b207d4cf4360a7b1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:04:57.430281  116240 certs.go:382] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/apiserver.crt.875f7a03 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/apiserver.crt
	I1003 19:04:57.430351  116240 certs.go:386] copying /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/apiserver.key.875f7a03 -> /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/apiserver.key
	I1003 19:04:57.430397  116240 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/proxy-client.key
	I1003 19:04:57.430406  116240 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/proxy-client.crt with IP's: []
	I1003 19:04:57.726462  116240 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/proxy-client.crt ...
	I1003 19:04:57.726486  116240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/proxy-client.crt: {Name:mke68e7df54a7dd1a5ac2cd30e2cdecaed94aa94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:04:57.726671  116240 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/proxy-client.key ...
	I1003 19:04:57.726676  116240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/proxy-client.key: {Name:mkd6450f858305aefd1cbad87357636895f77952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:04:57.726851  116240 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem (1338 bytes)
	W1003 19:04:57.726880  116240 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212_empty.pem, impossibly tiny 0 bytes
	I1003 19:04:57.726885  116240 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 19:04:57.726907  116240 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/ca.pem (1082 bytes)
	I1003 19:04:57.726935  116240 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/cert.pem (1123 bytes)
	I1003 19:04:57.726952  116240 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/certs/key.pem (1675 bytes)
	I1003 19:04:57.727005  116240 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem (1708 bytes)
	I1003 19:04:57.727560  116240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 19:04:57.745322  116240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 19:04:57.762964  116240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 19:04:57.780512  116240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 19:04:57.797953  116240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1003 19:04:57.814937  116240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 19:04:57.832658  116240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 19:04:57.849615  116240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/first-128130/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 19:04:57.867118  116240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/ssl/certs/122122.pem --> /usr/share/ca-certificates/122122.pem (1708 bytes)
	I1003 19:04:57.886812  116240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 19:04:57.903954  116240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8669/.minikube/certs/12212.pem --> /usr/share/ca-certificates/12212.pem (1338 bytes)
	I1003 19:04:57.921128  116240 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 19:04:57.933552  116240 ssh_runner.go:195] Run: openssl version
	I1003 19:04:57.939461  116240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122122.pem && ln -fs /usr/share/ca-certificates/122122.pem /etc/ssl/certs/122122.pem"
	I1003 19:04:57.948003  116240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122122.pem
	I1003 19:04:57.951845  116240 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:59 /usr/share/ca-certificates/122122.pem
	I1003 19:04:57.951900  116240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122122.pem
	I1003 19:04:57.987189  116240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122122.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 19:04:57.996798  116240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 19:04:58.005601  116240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:04:58.009447  116240 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:04:58.009498  116240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:04:58.044159  116240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 19:04:58.053163  116240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12212.pem && ln -fs /usr/share/ca-certificates/12212.pem /etc/ssl/certs/12212.pem"
	I1003 19:04:58.061721  116240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12212.pem
	I1003 19:04:58.065459  116240 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:59 /usr/share/ca-certificates/12212.pem
	I1003 19:04:58.065511  116240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12212.pem
	I1003 19:04:58.099327  116240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12212.pem /etc/ssl/certs/51391683.0"
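The hash-named links created above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention: openssl x509 -hash prints the cert's subject hash, and a <hash>.0 symlink makes the cert discoverable by directory lookups in /etc/ssl/certs. The same pattern by hand (illustrative sketch):

	# Compute the subject hash for the minikube CA and create the .0 link
	# that OpenSSL's cert-directory lookup expects.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"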
	I1003 19:04:58.108185  116240 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 19:04:58.111742  116240 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 19:04:58.111790  116240 kubeadm.go:400] StartCluster: {Name:first-128130 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-128130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:04:58.111848  116240 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 19:04:58.111906  116240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 19:04:58.138748  116240 cri.go:89] found id: ""
	I1003 19:04:58.138829  116240 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 19:04:58.147144  116240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 19:04:58.155079  116240 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 19:04:58.155121  116240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 19:04:58.162969  116240 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 19:04:58.162992  116240 kubeadm.go:157] found existing configuration files:
	
	I1003 19:04:58.163033  116240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 19:04:58.170951  116240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 19:04:58.171017  116240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 19:04:58.178512  116240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 19:04:58.186078  116240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 19:04:58.186120  116240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 19:04:58.193639  116240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 19:04:58.201343  116240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 19:04:58.201389  116240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 19:04:58.208845  116240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 19:04:58.217193  116240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 19:04:58.217232  116240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 19:04:58.224519  116240 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 19:04:58.292750  116240 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 19:04:58.349928  116240 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 19:09:02.370368  116240 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1003 19:09:02.370565  116240 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 19:09:02.373014  116240 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 19:09:02.373115  116240 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 19:09:02.373224  116240 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 19:09:02.373291  116240 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 19:09:02.373332  116240 kubeadm.go:318] OS: Linux
	I1003 19:09:02.373394  116240 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 19:09:02.373471  116240 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 19:09:02.373559  116240 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 19:09:02.373660  116240 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 19:09:02.373769  116240 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 19:09:02.373836  116240 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 19:09:02.373903  116240 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 19:09:02.373953  116240 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 19:09:02.374036  116240 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 19:09:02.374117  116240 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 19:09:02.374188  116240 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 19:09:02.374235  116240 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 19:09:02.377842  116240 out.go:252]   - Generating certificates and keys ...
	I1003 19:09:02.377912  116240 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 19:09:02.377966  116240 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 19:09:02.378062  116240 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 19:09:02.378142  116240 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 19:09:02.378220  116240 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 19:09:02.378284  116240 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 19:09:02.378362  116240 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 19:09:02.378477  116240 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [first-128130 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1003 19:09:02.378536  116240 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 19:09:02.378690  116240 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [first-128130 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1003 19:09:02.378740  116240 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 19:09:02.378791  116240 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 19:09:02.378835  116240 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 19:09:02.378876  116240 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 19:09:02.378914  116240 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 19:09:02.378965  116240 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 19:09:02.379030  116240 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 19:09:02.379085  116240 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 19:09:02.379129  116240 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 19:09:02.379190  116240 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 19:09:02.379240  116240 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 19:09:02.381497  116240 out.go:252]   - Booting up control plane ...
	I1003 19:09:02.381566  116240 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 19:09:02.381643  116240 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 19:09:02.381722  116240 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 19:09:02.381804  116240 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 19:09:02.381888  116240 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 19:09:02.382002  116240 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 19:09:02.382072  116240 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 19:09:02.382102  116240 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 19:09:02.382225  116240 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 19:09:02.382367  116240 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 19:09:02.382422  116240 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.88782ms
	I1003 19:09:02.382500  116240 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 19:09:02.382597  116240 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	I1003 19:09:02.382716  116240 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 19:09:02.382820  116240 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 19:09:02.382893  116240 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000215615s
	I1003 19:09:02.382956  116240 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000301356s
	I1003 19:09:02.383037  116240 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000719315s
	I1003 19:09:02.383041  116240 kubeadm.go:318] 
	I1003 19:09:02.383117  116240 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 19:09:02.383244  116240 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 19:09:02.383326  116240 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 19:09:02.383448  116240 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 19:09:02.383543  116240 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 19:09:02.383667  116240 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 19:09:02.383723  116240 kubeadm.go:318] 
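The three health endpoints kubeadm polls above can be probed by hand from inside the node to confirm that nothing is listening. A sketch using the URLs exactly as logged (-k skips TLS verification, since these components serve self-signed certificates; a "connection refused" here matches the failures reported by control-plane-check):

    # Reproduce kubeadm's control-plane-check probes manually.
    curl -ksm 5 https://192.168.58.2:8443/livez   ; echo "apiserver: $?"
    curl -ksm 5 https://127.0.0.1:10257/healthz   ; echo "controller-manager: $?"
    curl -ksm 5 https://127.0.0.1:10259/livez     ; echo "scheduler: $?"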
	W1003 19:09:02.383808  116240 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [first-128130 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [first-128130 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.88782ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000215615s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000301356s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000719315s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1003 19:09:02.383908  116240 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 19:09:02.836115  116240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:09:02.848215  116240 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 19:09:02.848257  116240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 19:09:02.856082  116240 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 19:09:02.856092  116240 kubeadm.go:157] found existing configuration files:
	
	I1003 19:09:02.856134  116240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 19:09:02.863791  116240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 19:09:02.863831  116240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 19:09:02.870971  116240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 19:09:02.878064  116240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 19:09:02.878108  116240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 19:09:02.884780  116240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 19:09:02.892513  116240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 19:09:02.892550  116240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 19:09:02.899613  116240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 19:09:02.906618  116240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 19:09:02.906652  116240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 19:09:02.913318  116240 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 19:09:02.967535  116240 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1003 19:09:03.024194  116240 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 19:13:05.024175  116240 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1003 19:13:05.024375  116240 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 19:13:05.026386  116240 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 19:13:05.026427  116240 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 19:13:05.026528  116240 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 19:13:05.026589  116240 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1003 19:13:05.026620  116240 kubeadm.go:318] OS: Linux
	I1003 19:13:05.026681  116240 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 19:13:05.026728  116240 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 19:13:05.026765  116240 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 19:13:05.026800  116240 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 19:13:05.026855  116240 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 19:13:05.026902  116240 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 19:13:05.026938  116240 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 19:13:05.027010  116240 kubeadm.go:318] CGROUPS_IO: enabled
	I1003 19:13:05.027112  116240 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 19:13:05.027204  116240 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 19:13:05.027280  116240 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 19:13:05.027330  116240 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 19:13:05.029685  116240 out.go:252]   - Generating certificates and keys ...
	I1003 19:13:05.029775  116240 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 19:13:05.029847  116240 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 19:13:05.029925  116240 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 19:13:05.030000  116240 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 19:13:05.030077  116240 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 19:13:05.030121  116240 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 19:13:05.030170  116240 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 19:13:05.030215  116240 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 19:13:05.030283  116240 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 19:13:05.030338  116240 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 19:13:05.030383  116240 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 19:13:05.030434  116240 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 19:13:05.030473  116240 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 19:13:05.030520  116240 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 19:13:05.030559  116240 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 19:13:05.030609  116240 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 19:13:05.030650  116240 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 19:13:05.030717  116240 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 19:13:05.030769  116240 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 19:13:05.032225  116240 out.go:252]   - Booting up control plane ...
	I1003 19:13:05.032423  116240 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 19:13:05.032489  116240 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 19:13:05.032561  116240 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 19:13:05.032685  116240 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 19:13:05.032794  116240 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 19:13:05.032881  116240 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 19:13:05.032955  116240 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 19:13:05.032995  116240 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 19:13:05.033106  116240 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 19:13:05.033187  116240 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 19:13:05.033259  116240 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.206893ms
	I1003 19:13:05.033338  116240 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 19:13:05.033402  116240 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	I1003 19:13:05.033469  116240 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 19:13:05.033527  116240 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 19:13:05.033584  116240 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000183802s
	I1003 19:13:05.033644  116240 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000227224s
	I1003 19:13:05.033715  116240 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000450328s
	I1003 19:13:05.033718  116240 kubeadm.go:318] 
	I1003 19:13:05.033786  116240 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 19:13:05.033880  116240 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 19:13:05.033969  116240 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 19:13:05.034050  116240 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 19:13:05.034110  116240 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 19:13:05.034177  116240 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 19:13:05.034194  116240 kubeadm.go:318] 
	I1003 19:13:05.034238  116240 kubeadm.go:402] duration metric: took 8m6.922452435s to StartCluster
	I1003 19:13:05.034275  116240 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 19:13:05.034322  116240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 19:13:05.059151  116240 cri.go:89] found id: ""
	I1003 19:13:05.059178  116240 logs.go:282] 0 containers: []
	W1003 19:13:05.059187  116240 logs.go:284] No container was found matching "kube-apiserver"
	I1003 19:13:05.059194  116240 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 19:13:05.059258  116240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 19:13:05.084775  116240 cri.go:89] found id: ""
	I1003 19:13:05.084790  116240 logs.go:282] 0 containers: []
	W1003 19:13:05.084798  116240 logs.go:284] No container was found matching "etcd"
	I1003 19:13:05.084804  116240 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 19:13:05.084863  116240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 19:13:05.110327  116240 cri.go:89] found id: ""
	I1003 19:13:05.110343  116240 logs.go:282] 0 containers: []
	W1003 19:13:05.110350  116240 logs.go:284] No container was found matching "coredns"
	I1003 19:13:05.110355  116240 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 19:13:05.110406  116240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 19:13:05.135451  116240 cri.go:89] found id: ""
	I1003 19:13:05.135467  116240 logs.go:282] 0 containers: []
	W1003 19:13:05.135475  116240 logs.go:284] No container was found matching "kube-scheduler"
	I1003 19:13:05.135481  116240 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 19:13:05.135544  116240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 19:13:05.160184  116240 cri.go:89] found id: ""
	I1003 19:13:05.160199  116240 logs.go:282] 0 containers: []
	W1003 19:13:05.160205  116240 logs.go:284] No container was found matching "kube-proxy"
	I1003 19:13:05.160209  116240 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 19:13:05.160253  116240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 19:13:05.185898  116240 cri.go:89] found id: ""
	I1003 19:13:05.185912  116240 logs.go:282] 0 containers: []
	W1003 19:13:05.185926  116240 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 19:13:05.185931  116240 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 19:13:05.186008  116240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 19:13:05.210720  116240 cri.go:89] found id: ""
	I1003 19:13:05.210735  116240 logs.go:282] 0 containers: []
	W1003 19:13:05.210741  116240 logs.go:284] No container was found matching "kindnet"
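Every "0 containers" result above comes from the same crictl query repeated once per component; the sweep condenses to a single loop, using the component names and flags from the log. An empty line of output for all of them means no control-plane container was ever created:

    # Check whether any control-plane container exists in any state.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"
    done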
	I1003 19:13:05.210749  116240 logs.go:123] Gathering logs for CRI-O ...
	I1003 19:13:05.210762  116240 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 19:13:05.271464  116240 logs.go:123] Gathering logs for container status ...
	I1003 19:13:05.271483  116240 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 19:13:05.299430  116240 logs.go:123] Gathering logs for kubelet ...
	I1003 19:13:05.299447  116240 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 19:13:05.365751  116240 logs.go:123] Gathering logs for dmesg ...
	I1003 19:13:05.365770  116240 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 19:13:05.377801  116240 logs.go:123] Gathering logs for describe nodes ...
	I1003 19:13:05.377816  116240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 19:13:05.434656  116240 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 19:13:05.427607    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:13:05.428206    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:13:05.429776    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:13:05.430225    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:13:05.431925    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 19:13:05.427607    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:13:05.428206    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:13:05.429776    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:13:05.430225    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:13:05.431925    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1003 19:13:05.434670  116240 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.206893ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000183802s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000227224s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000450328s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1003 19:13:05.434705  116240 out.go:285] * 
	W1003 19:13:05.434773  116240 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1003 19:13:05.434788  116240 out.go:285] * 
	W1003 19:13:05.436422  116240 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 19:13:05.439735  116240 out.go:203] 
	W1003 19:13:05.440729  116240 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1003 19:13:05.440755  116240 out.go:285] * 
	I1003 19:13:05.442111  116240 out.go:203] 
	
	
	==> CRI-O <==
	Oct 03 19:13:02 first-128130 crio[771]: time="2025-10-03T19:13:02.942116374Z" level=info msg="createCtr: removing container 2fe5177cad245a6d30a5d1becbec0ccaf6e12fbfa930e279905bb5458b939ba2" id=65f066ab-786f-4cfb-83d9-940e1e0d2a75 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:13:02 first-128130 crio[771]: time="2025-10-03T19:13:02.942144401Z" level=info msg="createCtr: deleting container 2fe5177cad245a6d30a5d1becbec0ccaf6e12fbfa930e279905bb5458b939ba2 from storage" id=65f066ab-786f-4cfb-83d9-940e1e0d2a75 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:13:02 first-128130 crio[771]: time="2025-10-03T19:13:02.944407116Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-first-128130_kube-system_2fbd531be1d5a547833c7ec24ab673f9_0" id=65f066ab-786f-4cfb-83d9-940e1e0d2a75 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:13:03 first-128130 crio[771]: time="2025-10-03T19:13:03.917304006Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=c656a314-c513-4af2-bc14-f3ff51258221 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:13:03 first-128130 crio[771]: time="2025-10-03T19:13:03.917310487Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=aae1053a-9e4f-436e-b53b-cf3c87e75cec name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:13:03 first-128130 crio[771]: time="2025-10-03T19:13:03.918249496Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=76cb3d7a-5c42-4198-8a8d-4b80745294cf name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:13:03 first-128130 crio[771]: time="2025-10-03T19:13:03.918251271Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=983cbbda-8edd-45d4-826b-3040c0df5ed6 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:13:03 first-128130 crio[771]: time="2025-10-03T19:13:03.91981711Z" level=info msg="Creating container: kube-system/etcd-first-128130/etcd" id=ea1282c5-a865-4276-bce2-8feaa1d6153f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:13:03 first-128130 crio[771]: time="2025-10-03T19:13:03.91994474Z" level=info msg="Creating container: kube-system/kube-scheduler-first-128130/kube-scheduler" id=d24b0e53-7b69-4379-9a1e-582914628281 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:13:03 first-128130 crio[771]: time="2025-10-03T19:13:03.920059222Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:13:03 first-128130 crio[771]: time="2025-10-03T19:13:03.920185872Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:13:03 first-128130 crio[771]: time="2025-10-03T19:13:03.924206117Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:13:03 first-128130 crio[771]: time="2025-10-03T19:13:03.924646386Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:13:03 first-128130 crio[771]: time="2025-10-03T19:13:03.926079479Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:13:03 first-128130 crio[771]: time="2025-10-03T19:13:03.926594803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:13:03 first-128130 crio[771]: time="2025-10-03T19:13:03.943176366Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ea1282c5-a865-4276-bce2-8feaa1d6153f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:13:03 first-128130 crio[771]: time="2025-10-03T19:13:03.944336052Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d24b0e53-7b69-4379-9a1e-582914628281 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:13:03 first-128130 crio[771]: time="2025-10-03T19:13:03.944543589Z" level=info msg="createCtr: deleting container ID 2c64f292c52240342a9f99833a5d1f11ae225222e74407113b04c07caced5088 from idIndex" id=ea1282c5-a865-4276-bce2-8feaa1d6153f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:13:03 first-128130 crio[771]: time="2025-10-03T19:13:03.944578909Z" level=info msg="createCtr: removing container 2c64f292c52240342a9f99833a5d1f11ae225222e74407113b04c07caced5088" id=ea1282c5-a865-4276-bce2-8feaa1d6153f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:13:03 first-128130 crio[771]: time="2025-10-03T19:13:03.944615467Z" level=info msg="createCtr: deleting container 2c64f292c52240342a9f99833a5d1f11ae225222e74407113b04c07caced5088 from storage" id=ea1282c5-a865-4276-bce2-8feaa1d6153f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:13:03 first-128130 crio[771]: time="2025-10-03T19:13:03.945715109Z" level=info msg="createCtr: deleting container ID 1f998986c3bbacf9a4472a07fad6e046d691cb05a563fe4d2b02f7ff41bf62d3 from idIndex" id=d24b0e53-7b69-4379-9a1e-582914628281 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:13:03 first-128130 crio[771]: time="2025-10-03T19:13:03.945746184Z" level=info msg="createCtr: removing container 1f998986c3bbacf9a4472a07fad6e046d691cb05a563fe4d2b02f7ff41bf62d3" id=d24b0e53-7b69-4379-9a1e-582914628281 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:13:03 first-128130 crio[771]: time="2025-10-03T19:13:03.945770788Z" level=info msg="createCtr: deleting container 1f998986c3bbacf9a4472a07fad6e046d691cb05a563fe4d2b02f7ff41bf62d3 from storage" id=d24b0e53-7b69-4379-9a1e-582914628281 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:13:03 first-128130 crio[771]: time="2025-10-03T19:13:03.948256596Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-first-128130_kube-system_7e492c5cb8de04398578981d70386ede_0" id=ea1282c5-a865-4276-bce2-8feaa1d6153f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:13:03 first-128130 crio[771]: time="2025-10-03T19:13:03.948624235Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-first-128130_kube-system_68447e5c2f1b5895019c86cb0c0a3e32_0" id=d24b0e53-7b69-4379-9a1e-582914628281 name=/runtime.v1.RuntimeService/CreateContainer
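The repeated "Container creation error: cannot open sd-bus: No such file or directory" entries are the point of failure: every control-plane container create is rejected before it starts, which matches the empty crictl listings above. That error pattern typically indicates a runtime configured for the systemd cgroup manager that cannot reach systemd's D-Bus socket; whether that is the cause in this run is not confirmed by the log, but a sketch of the check follows (cgroup_manager is a real CRI-O option under [crio.runtime]; the cgroupfs fallback is an assumption, not a verified fix for this failure):

    # Inspect which cgroup manager CRI-O is configured to use.
    sudo crio config 2>/dev/null | grep cgroup_manager
    # Check whether the sd-bus endpoints exist at all.
    ls -l /run/systemd/private /run/dbus/system_bus_socket
    # Possible mitigation (assumption): switch to cgroupfs in /etc/crio/crio.conf
    #   [crio.runtime]
    #   cgroup_manager = "cgroupfs"
    # then restart the runtime: sudo systemctl restart crio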
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 19:13:06.556828    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:13:06.557335    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:13:06.558904    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:13:06.559306    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:13:06.560895    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.374530] i8042: Warning: Keylock active
	[  +0.010846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003424] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000691] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.479345] block sda: the capability attribute has been deprecated.
	[  +0.086934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025583] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.992810] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:13:06 up  1:55,  0 user,  load average: 0.00, 0.11, 0.16
	Linux first-128130 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 19:13:02 first-128130 kubelet[1784]: E1003 19:13:02.917123    1784 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"first-128130\" not found" node="first-128130"
	Oct 03 19:13:02 first-128130 kubelet[1784]: E1003 19:13:02.944681    1784 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 19:13:02 first-128130 kubelet[1784]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 19:13:02 first-128130 kubelet[1784]:  > podSandboxID="a220358a33c7620293e66921f376bf5f10d630c6c7c0fe844c5e9dca3264242c"
	Oct 03 19:13:02 first-128130 kubelet[1784]: E1003 19:13:02.944771    1784 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 19:13:02 first-128130 kubelet[1784]:         container kube-apiserver start failed in pod kube-apiserver-first-128130_kube-system(2fbd531be1d5a547833c7ec24ab673f9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 19:13:02 first-128130 kubelet[1784]:  > logger="UnhandledError"
	Oct 03 19:13:02 first-128130 kubelet[1784]: E1003 19:13:02.944799    1784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-first-128130" podUID="2fbd531be1d5a547833c7ec24ab673f9"
	Oct 03 19:13:03 first-128130 kubelet[1784]: E1003 19:13:03.916894    1784 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"first-128130\" not found" node="first-128130"
	Oct 03 19:13:03 first-128130 kubelet[1784]: E1003 19:13:03.917021    1784 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"first-128130\" not found" node="first-128130"
	Oct 03 19:13:03 first-128130 kubelet[1784]: E1003 19:13:03.948513    1784 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 19:13:03 first-128130 kubelet[1784]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 19:13:03 first-128130 kubelet[1784]:  > podSandboxID="05f8893f11f3cf3b5492213cc53683ccf34d89cbeff9f7b4e0bcd04a4e748998"
	Oct 03 19:13:03 first-128130 kubelet[1784]: E1003 19:13:03.948604    1784 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 19:13:03 first-128130 kubelet[1784]:         container etcd start failed in pod etcd-first-128130_kube-system(7e492c5cb8de04398578981d70386ede): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 19:13:03 first-128130 kubelet[1784]:  > logger="UnhandledError"
	Oct 03 19:13:03 first-128130 kubelet[1784]: E1003 19:13:03.948632    1784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-first-128130" podUID="7e492c5cb8de04398578981d70386ede"
	Oct 03 19:13:03 first-128130 kubelet[1784]: E1003 19:13:03.948841    1784 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 19:13:03 first-128130 kubelet[1784]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 19:13:03 first-128130 kubelet[1784]:  > podSandboxID="64c9b88a5d26f164c99829bf79bf7269320fc8786e49e9d1c0b3a98a11bdcbe0"
	Oct 03 19:13:03 first-128130 kubelet[1784]: E1003 19:13:03.948923    1784 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 19:13:03 first-128130 kubelet[1784]:         container kube-scheduler start failed in pod kube-scheduler-first-128130_kube-system(68447e5c2f1b5895019c86cb0c0a3e32): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 19:13:03 first-128130 kubelet[1784]:  > logger="UnhandledError"
	Oct 03 19:13:03 first-128130 kubelet[1784]: E1003 19:13:03.950062    1784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-first-128130" podUID="68447e5c2f1b5895019c86cb0c0a3e32"
	Oct 03 19:13:04 first-128130 kubelet[1784]: E1003 19:13:04.933226    1784 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"first-128130\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p first-128130 -n first-128130
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p first-128130 -n first-128130: exit status 6 (305.422355ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 19:13:06.940763  121658 status.go:458] kubeconfig endpoint: get endpoint: "first-128130" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "first-128130" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "first-128130" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-128130
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-128130: (1.88843713s)
--- FAIL: TestMinikubeProfile (500.66s)
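The failure above is one chain: crio cannot reach systemd over sd-bus, so every CreateContainer call for the control-plane pods (etcd, kube-scheduler, kube-apiserver) fails, no apiserver ever listens on localhost:8443, kubectl gets "connection refused", and `minikube status` reports Stopped. A minimal out-of-band check of that chain, sketched in Go in the style of the suite's exec helpers (the profile name comes from the log above; the D-Bus socket path is an assumption about what sd-bus tries to open, not something the log states):

	// diag_sdbus.go - a diagnostic sketch, not part of the minikube test suite.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "first-128130" // profile name taken from the failed run above
		for _, args := range [][]string{
			// If crio's CreateContainer kept failing, this lists no containers at all,
			// matching the empty "container status" table in the log.
			{"minikube", "-p", profile, "ssh", "--", "sudo", "crictl", "ps", "-a"},
			// "cannot open sd-bus" suggests the systemd D-Bus socket is missing inside
			// the node; the exact path below is an assumption, adjust as needed.
			{"minikube", "-p", profile, "ssh", "--", "ls", "-l", "/run/dbus/system_bus_socket"},
		} {
			out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
			fmt.Printf("$ %v\nerr: %v\n%s\n", args, err, out)
		}
	}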

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (7200.059s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-460306
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-460306-m01 --driver=docker  --container-runtime=crio
E1003 19:38:14.922480   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:41:51.838465   12212 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/functional-889240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic: test timed out after 2h0m0s
	running tests:
		TestMultiNode (28m55s)
		TestMultiNode/serial (28m55s)
		TestMultiNode/serial/ValidateNameConflict (5m20s)

                                                
                                                
goroutine 2122 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2484 +0x394
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d
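That panic is Go's binary-wide test deadline, not a failure inside ValidateNameConflict itself: testing.(*M).startAlarm (goroutine 2122 above) arms a timer for the -timeout value and panics with a dump of every goroutine when it fires, which is what the rest of this section is. A tiny sketch that reproduces the same trace shape with a short deadline (hypothetical test, not from the suite):

	// timeout_example_test.go - run with: go test -run TestSleepsPastDeadline -timeout 2s
	package main

	import (
		"testing"
		"time"
	)

	func TestSleepsPastDeadline(t *testing.T) {
		// Sleeping past -timeout makes the alarm goroutine panic with
		// "panic: test timed out after 2s" plus every goroutine's stack,
		// the same shape as the dump that follows here.
		time.Sleep(10 * time.Second)
	}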

                                                
                                                
goroutine 1 [chan receive, 29 minutes]:
testing.(*T).Run(0xc000102e00, {0x32034e4?, 0xc0008d1a78?}, 0x3c51c70)
	/usr/local/go/src/testing/testing.go:1859 +0x431
testing.runTests.func1(0xc000102e00)
	/usr/local/go/src/testing/testing.go:2279 +0x37
testing.tRunner(0xc000102e00, 0xc0008d1bb8)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
testing.runTests(0xc0006141c8, {0x5c616c0, 0x2b, 0x2b}, {0xffffffffffffffff?, 0xc00036e000?, 0x5c89dc0?})
	/usr/local/go/src/testing/testing.go:2277 +0x4b4
testing.(*M).Run(0xc000b12b40)
	/usr/local/go/src/testing/testing.go:2142 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc000b12b40)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0x105
main.main()
	_testmain.go:131 +0xa8

                                                
                                                
goroutine 111 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc000505180)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000505180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestCertExpiration(0xc000505180)
	/home/jenkins/workspace/Build_Cross/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc000505180, 0x3c51b88)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 102 [chan receive, 119 minutes]:
testing.(*T).Parallel(0xc000602380)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000602380)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestOffline(0xc000602380)
	/home/jenkins/workspace/Build_Cross/test/integration/aab_offline_test.go:32 +0x39
testing.tRunner(0xc000602380, 0x3c51c88)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 110 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc000504c40)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000504c40)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestCertOptions(0xc000504c40)
	/home/jenkins/workspace/Build_Cross/test/integration/cert_options_test.go:36 +0xb3
testing.tRunner(0xc000504c40, 0x3c51b90)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 227 [IO wait, 102 minutes]:
internal/poll.runtime_pollWait(0x7dc49db35968, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000427000?, 0x900000036?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000427000)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc000427000)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0006a76c0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1b
net.(*TCPListener).Accept(0xc0006a76c0)
	/usr/local/go/src/net/tcpsock.go:380 +0x30
net/http.(*Server).Serve(0xc0001ff100, {0x3f9ba90, 0xc0006a76c0})
	/usr/local/go/src/net/http/server.go:3424 +0x30c
net/http.(*Server).ListenAndServe(0xc0001ff100)
	/usr/local/go/src/net/http/server.go:3350 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 208
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x129

                                                
                                                
goroutine 1860 [chan receive, 29 minutes]:
testing.(*T).Run(0xc001989340, {0x31f3134?, 0x1a3185c5000?}, 0xc000ad89c0)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestMultiNode(0xc001989340)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:59 +0x3c5
testing.tRunner(0xc001989340, 0x3c51c70)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 162 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc0005056c0)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc0005056c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc0005056c0)
	/home/jenkins/workspace/Build_Cross/test/integration/docker_test.go:146 +0xb3
testing.tRunner(0xc0005056c0, 0x3c51bd0)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 113 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc000505500)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000505500)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc000505500)
	/home/jenkins/workspace/Build_Cross/test/integration/docker_test.go:83 +0xb3
testing.tRunner(0xc000505500, 0x3c51bd8)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 643 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3fc0c20, {{0x3fb5c48, 0xc0002483c0?}, 0x6873732033363038?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 411
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

                                                
                                                
goroutine 644 [chan receive, 75 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc0007fbda0, 0xc0006b0070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 411
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

                                                
                                                
goroutine 694 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc002208900, 0xc0006b1b20)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 402
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

                                                
                                                
goroutine 602 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3fae530, 0xc0006b0070}, 0xc0014c3f50, 0xc001b92f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3fae530, 0xc0006b0070}, 0x90?, 0xc0014c3f50, 0xc0014c3f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3fae530?, 0xc0006b0070?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014c3fd0?, 0x5932a4?, 0xc0008f9f80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 644
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286

                                                
                                                
goroutine 2119 [IO wait, 5 minutes]:
internal/poll.runtime_pollWait(0x7dc49db35dc8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000b4c660?, 0xc00067da8d?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000b4c660, {0xc00067da8d, 0x573, 0x573})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006a4098, {0xc00067da8d?, 0x41835f?, 0x2c42f20?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc000ac8390, {0x3f63940, 0xc00040e070})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f63ac0, 0xc000ac8390}, {0x3f63940, 0xc00040e070}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0006a4098?, {0x3f63ac0, 0xc000ac8390})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0006a4098, {0x3f63ac0, 0xc000ac8390})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f63ac0, 0xc000ac8390}, {0x3f639c0, 0xc0006a4098}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc0008f9340?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2099
	/usr/local/go/src/os/exec/exec.go:748 +0x92b

                                                
                                                
goroutine 678 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc000698180, 0xc000b003f0)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 677
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

                                                
                                                
goroutine 2099 [syscall, 5 minutes]:
syscall.Syscall6(0xf7, 0x3, 0xe, 0xc0008d3a08, 0x4, 0xc001b265a0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
internal/syscall/unix.Waitid(0xc0008d3a36?, 0xc0008d3b60?, 0x5930ab?, 0x7ffd4fa901af?, 0x0?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x39
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:106
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:251
os.(*Process).pidfdWait(0xc000116600?)
	/usr/local/go/src/os/pidfd_linux.go:105 +0x209
os.(*Process).wait(0x5c8c460?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc000698300)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc000698300)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc000b5d880, 0xc000698300)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateNameConflict({0x3fae1b0, 0xc00033a230}, 0xc000b5d880, {0xc00034a2f0, 0x10})
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:464 +0x48d
k8s.io/minikube/test/integration.TestMultiNode.func1.1(0xc000b5d880?)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:86 +0x6b
testing.tRunner(0xc000b5d880, 0xc000892040)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1844
	/usr/local/go/src/testing/testing.go:1851 +0x413
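Goroutines 1860, 1844, and 2099 above are the whole story of the hang: TestMultiNode chains serial subtests with t.Run, each parent parked in a channel receive until its child returns, and the leaf blocks in cmd.Wait on the external `minikube start`. A schematic of that shape (illustrative names, not the real multinode_test.go):

	// serial_chain_example_test.go - why one stuck subprocess holds three goroutines.
	package main

	import "testing"

	func TestSerialChainShape(t *testing.T) { // parent: "chan receive", like goroutine 1860
		t.Run("serial", func(t *testing.T) { // middle: "chan receive", like goroutine 1844
			t.Run("ValidateNameConflict", func(t *testing.T) {
				// The real leaf shells out (multinode_test.go:464) and blocks in
				// cmd.Wait like goroutine 2099; nothing unwinds until the process
				// exits or the binary-wide -timeout panic above tears it all down.
			})
		})
	}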

                                                
                                                
goroutine 601 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0006a7b10, 0x23)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc001b97ce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3fc4020)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0007fbda0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0xc0006e8480?, 0xc001801920?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x3fae530?, 0xc0006b0070?}, 0x41b1b4?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x3fae530, 0xc0006b0070}, 0xc001b97f50, {0x3f65540, 0xc00146ed20}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3f65540?, 0xc00146ed20?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000ae98b0, 0x3b9aca00, 0x0, 0x1, 0xc0006b0070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 644
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9

                                                
                                                
goroutine 572 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc00017ec00, 0xc000084930)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 571
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

                                                
                                                
goroutine 603 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 602
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1844 [chan receive, 5 minutes]:
testing.(*T).Run(0xc001482e00, {0x321805f?, 0x40965a4?}, 0xc000892040)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestMultiNode.func1(0xc001482e00)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:84 +0x17d
testing.tRunner(0xc001482e00, 0xc000ad89c0)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1860
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 2120 [IO wait]:
internal/poll.runtime_pollWait(0x7dc49db351c0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000b4c780?, 0xc0002df769?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000b4c780, {0xc0002df769, 0x897, 0x897})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006a40b0, {0xc0002df769?, 0x41835f?, 0x2c42f20?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc000ac8480, {0x3f63940, 0xc0000bc4d0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f63ac0, 0xc000ac8480}, {0x3f63940, 0xc0000bc4d0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0006a40b0?, {0x3f63ac0, 0xc000ac8480})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0006a40b0, {0x3f63ac0, 0xc000ac8480})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f63ac0, 0xc000ac8480}, {0x3f639c0, 0xc0006a40b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc000b00690?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2099
	/usr/local/go/src/os/exec/exec.go:748 +0x92b

                                                
                                                
goroutine 2121 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc000698300, 0xc000084cb0)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 2099
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

                                                
                                    

Test pass (92/166)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.45
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 3.9
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.4
21 TestBinaryMirror 0.81
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
38 TestErrorSpam/start 0.68
39 TestErrorSpam/status 0.89
40 TestErrorSpam/pause 1.33
41 TestErrorSpam/unpause 1.33
42 TestErrorSpam/stop 1.42
45 TestFunctional/serial/CopySyncFile 0
47 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/KubeContext 0.05
53 TestFunctional/serial/CacheCmd/cache/add_remote 2.66
54 TestFunctional/serial/CacheCmd/cache/add_local 0.79
55 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
56 TestFunctional/serial/CacheCmd/cache/list 0.06
57 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
58 TestFunctional/serial/CacheCmd/cache/cache_reload 1.49
59 TestFunctional/serial/CacheCmd/cache/delete 0.12
64 TestFunctional/serial/LogsCmd 0.86
65 TestFunctional/serial/LogsFileCmd 0.88
68 TestFunctional/parallel/ConfigCmd 0.43
70 TestFunctional/parallel/DryRun 0.42
71 TestFunctional/parallel/InternationalLanguage 0.16
77 TestFunctional/parallel/AddonsCmd 0.16
80 TestFunctional/parallel/SSHCmd 0.64
81 TestFunctional/parallel/CpCmd 1.93
83 TestFunctional/parallel/FileSync 0.31
84 TestFunctional/parallel/CertSync 1.98
90 TestFunctional/parallel/NonActiveRuntimeDisabled 0.63
92 TestFunctional/parallel/License 0.28
95 TestFunctional/parallel/Version/short 0.07
96 TestFunctional/parallel/Version/components 0.49
98 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
99 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
100 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
101 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
102 TestFunctional/parallel/ImageCommands/ImageBuild 2.9
103 TestFunctional/parallel/ImageCommands/Setup 0.55
110 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
111 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
112 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
113 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
115 TestFunctional/parallel/ProfileCmd/profile_list 0.45
117 TestFunctional/parallel/ImageCommands/ImageRemove 0.61
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
123 TestFunctional/parallel/MountCmd/specific-port 1.7
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
128 TestFunctional/parallel/MountCmd/VerifyCleanup 1.93
132 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
133 TestFunctional/delete_echo-server_images 0.04
134 TestFunctional/delete_my-image_image 0.02
135 TestFunctional/delete_minikube_cached_images 0.02
163 TestJSONOutput/start/Audit 0
168 TestJSONOutput/pause/Command 0.45
169 TestJSONOutput/pause/Audit 0
171 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/unpause/Command 0.43
175 TestJSONOutput/unpause/Audit 0
177 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/stop/Command 1.23
181 TestJSONOutput/stop/Audit 0
183 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
185 TestErrorJSONOutput 0.22
187 TestKicCustomNetwork/create_custom_network 27.55
188 TestKicCustomNetwork/use_default_bridge_network 26.01
189 TestKicExistingNetwork 24.31
190 TestKicCustomSubnet 25.14
191 TestKicStaticIP 23.9
192 TestMainNoArgs 0.06
196 TestMountStart/serial/StartWithMountFirst 8.21
197 TestMountStart/serial/VerifyMountFirst 0.27
198 TestMountStart/serial/StartWithMountSecond 5.34
199 TestMountStart/serial/VerifyMountSecond 0.27
200 TestMountStart/serial/DeleteFirst 1.66
201 TestMountStart/serial/VerifyMountPostDelete 0.26
202 TestMountStart/serial/Stop 1.2
203 TestMountStart/serial/RestartStopped 7.36
204 TestMountStart/serial/VerifyMountPostStop 0.27
x
+
TestDownloadOnly/v1.28.0/json-events (5.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-903573 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-903573 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.44486669s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.45s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1003 17:42:35.526267   12212 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1003 17:42:35.526362   12212 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-903573
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-903573: exit status 85 (71.758625ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-903573 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-903573 │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 17:42:30
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 17:42:30.131002   12224 out.go:360] Setting OutFile to fd 1 ...
	I1003 17:42:30.131283   12224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 17:42:30.131293   12224 out.go:374] Setting ErrFile to fd 2...
	I1003 17:42:30.131297   12224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 17:42:30.131499   12224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	W1003 17:42:30.131623   12224 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21625-8669/.minikube/config/config.json: open /home/jenkins/minikube-integration/21625-8669/.minikube/config/config.json: no such file or directory
	I1003 17:42:30.132154   12224 out.go:368] Setting JSON to true
	I1003 17:42:30.133139   12224 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1501,"bootTime":1759511849,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 17:42:30.133226   12224 start.go:140] virtualization: kvm guest
	I1003 17:42:30.135450   12224 out.go:99] [download-only-903573] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1003 17:42:30.135604   12224 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball: no such file or directory
	I1003 17:42:30.135669   12224 notify.go:220] Checking for updates...
	I1003 17:42:30.136889   12224 out.go:171] MINIKUBE_LOCATION=21625
	I1003 17:42:30.138021   12224 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:42:30.139247   12224 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 17:42:30.140311   12224 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 17:42:30.141305   12224 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1003 17:42:30.143162   12224 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1003 17:42:30.143374   12224 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 17:42:30.166140   12224 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 17:42:30.166198   12224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 17:42:30.541612   12224 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-03 17:42:30.53136457 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 17:42:30.541746   12224 docker.go:318] overlay module found
	I1003 17:42:30.543296   12224 out.go:99] Using the docker driver based on user configuration
	I1003 17:42:30.543336   12224 start.go:304] selected driver: docker
	I1003 17:42:30.543343   12224 start.go:924] validating driver "docker" against <nil>
	I1003 17:42:30.543437   12224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 17:42:30.596778   12224 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-03 17:42:30.585967889 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 17:42:30.596935   12224 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 17:42:30.597443   12224 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1003 17:42:30.597590   12224 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 17:42:30.599190   12224 out.go:171] Using Docker driver with root privileges
	I1003 17:42:30.600344   12224 cni.go:84] Creating CNI manager for ""
	I1003 17:42:30.600414   12224 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 17:42:30.600429   12224 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 17:42:30.600494   12224 start.go:348] cluster config:
	{Name:download-only-903573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-903573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 17:42:30.601638   12224 out.go:99] Starting "download-only-903573" primary control-plane node in "download-only-903573" cluster
	I1003 17:42:30.601662   12224 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 17:42:30.602905   12224 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1003 17:42:30.602932   12224 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1003 17:42:30.603033   12224 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 17:42:30.619234   12224 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1003 17:42:30.619416   12224 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1003 17:42:30.619530   12224 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1003 17:42:30.629125   12224 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1003 17:42:30.629150   12224 cache.go:58] Caching tarball of preloaded images
	I1003 17:42:30.629300   12224 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1003 17:42:30.631154   12224 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1003 17:42:30.631177   12224 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1003 17:42:30.658440   12224 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1003 17:42:30.658575   12224 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1003 17:42:34.397891   12224 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1003 17:42:34.920319   12224 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1003 17:42:34.920646   12224 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/download-only-903573/config.json ...
	I1003 17:42:34.920680   12224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8669/.minikube/profiles/download-only-903573/config.json: {Name:mk12c17eaf8426601d1ef6cd2924210fe6d6819c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:42:34.920835   12224 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1003 17:42:34.921011   12224 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21625-8669/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-903573 host does not exist
	  To start a cluster, run: "minikube start -p download-only-903573"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
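The "Last Start" log above records both the preload tarball's destination path and the MD5 checksum minikube fetched from the GCS API (72bc7f8573f574c02d8c9a9b3496176b) before downloading. A small sketch that re-verifies the cached artifact after the fact (the path is copied from the download.go:108 line above; substitute your own MINIKUBE_HOME):

	// verify_preload.go - recompute the preload tarball's MD5 and compare by eye.
	package main

	import (
		"crypto/md5"
		"fmt"
		"io"
		"os"
	)

	func main() {
		// Destination path as logged by download.go:108 above.
		path := "/home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/" +
			"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"
		f, err := os.Open(path)
		if err != nil {
			fmt.Println("preload missing:", err)
			return
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			fmt.Println("read failed:", err)
			return
		}
		fmt.Printf("md5 = %x (log expects 72bc7f8573f574c02d8c9a9b3496176b)\n", h.Sum(nil))
	}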

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-903573
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (3.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-455553 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-455553 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.899539273s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.90s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1003 17:42:39.861414   12212 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1003 17:42:39.861463   12212 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8669/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-455553
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-455553: exit status 85 (71.350141ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-903573 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-903573 │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │ 03 Oct 25 17:42 UTC │
	│ delete  │ -p download-only-903573                                                                                                                                                   │ download-only-903573 │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │ 03 Oct 25 17:42 UTC │
	│ start   │ -o=json --download-only -p download-only-455553 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-455553 │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 17:42:36
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 17:42:36.011258   12575 out.go:360] Setting OutFile to fd 1 ...
	I1003 17:42:36.011510   12575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 17:42:36.011521   12575 out.go:374] Setting ErrFile to fd 2...
	I1003 17:42:36.011525   12575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 17:42:36.011715   12575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 17:42:36.012173   12575 out.go:368] Setting JSON to true
	I1003 17:42:36.012929   12575 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1507,"bootTime":1759511849,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 17:42:36.013024   12575 start.go:140] virtualization: kvm guest
	I1003 17:42:36.015017   12575 out.go:99] [download-only-455553] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 17:42:36.015148   12575 notify.go:220] Checking for updates...
	I1003 17:42:36.016445   12575 out.go:171] MINIKUBE_LOCATION=21625
	I1003 17:42:36.017758   12575 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:42:36.018825   12575 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 17:42:36.019834   12575 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 17:42:36.021053   12575 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1003 17:42:36.023275   12575 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1003 17:42:36.023471   12575 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 17:42:36.047815   12575 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 17:42:36.047876   12575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 17:42:36.100393   12575 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:54 SystemTime:2025-10-03 17:42:36.091235996 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 17:42:36.100500   12575 docker.go:318] overlay module found
	I1003 17:42:36.101936   12575 out.go:99] Using the docker driver based on user configuration
	I1003 17:42:36.101968   12575 start.go:304] selected driver: docker
	I1003 17:42:36.101988   12575 start.go:924] validating driver "docker" against <nil>
	I1003 17:42:36.102094   12575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 17:42:36.153521   12575 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:54 SystemTime:2025-10-03 17:42:36.144762657 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 17:42:36.153696   12575 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 17:42:36.154197   12575 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1003 17:42:36.154326   12575 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 17:42:36.156099   12575 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-455553 host does not exist
	  To start a cluster, run: "minikube start -p download-only-455553"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-455553
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.4s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-423289 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-423289" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-423289
--- PASS: TestDownloadOnlyKic (0.40s)

TestBinaryMirror (0.81s)

=== RUN   TestBinaryMirror
I1003 17:42:40.967124   12212 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-626924 --alsologtostderr --binary-mirror http://127.0.0.1:44037 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-626924" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-626924
--- PASS: TestBinaryMirror (0.81s)
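The binary.go line above shows the kubectl download being verified against a published .sha256 file (the checksum=file:... fragment in the URL). A hedged sketch of that verification step, assuming the binary and its .sha256 file are already on disk under placeholder names:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "os"
        "strings"
    )

    // verifySHA256 compares data against the hex digest published in a
    // kubectl-style .sha256 file (digest, optionally followed by a filename).
    func verifySHA256(data []byte, published string) error {
        fields := strings.Fields(published)
        if len(fields) == 0 {
            return fmt.Errorf("empty checksum file")
        }
        sum := sha256.Sum256(data)
        if got := hex.EncodeToString(sum[:]); got != fields[0] {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, fields[0])
        }
        return nil
    }

    func main() {
        bin, err := os.ReadFile("kubectl") // placeholder paths, not from the suite
        if err != nil {
            fmt.Println(err)
            return
        }
        sum, err := os.ReadFile("kubectl.sha256")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(verifySHA256(bin, string(sum)))
    }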

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-051972
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-051972: exit status 85 (61.868049ms)

-- stdout --
	* Profile "addons-051972" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-051972"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-051972
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-051972: exit status 85 (62.436671ms)

-- stdout --
	* Profile "addons-051972" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-051972"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestErrorSpam/start (0.68s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-093146 --log_dir /tmp/nospam-093146 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-093146 --log_dir /tmp/nospam-093146 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-093146 --log_dir /tmp/nospam-093146 start --dry-run
--- PASS: TestErrorSpam/start (0.68s)

TestErrorSpam/status (0.89s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-093146 --log_dir /tmp/nospam-093146 status
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-093146 --log_dir /tmp/nospam-093146 status: exit status 6 (296.67469ms)

-- stdout --
	nospam-093146
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1003 17:59:41.563640   24295 status.go:458] kubeconfig endpoint: get endpoint: "nospam-093146" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-093146 --log_dir /tmp/nospam-093146 status" failed: exit status 6
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-093146 --log_dir /tmp/nospam-093146 status
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-093146 --log_dir /tmp/nospam-093146 status: exit status 6 (294.025462ms)

-- stdout --
	nospam-093146
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1003 17:59:41.857751   24406 status.go:458] kubeconfig endpoint: get endpoint: "nospam-093146" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-093146 --log_dir /tmp/nospam-093146 status" failed: exit status 6
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-093146 --log_dir /tmp/nospam-093146 status
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-093146 --log_dir /tmp/nospam-093146 status: exit status 6 (294.317754ms)

-- stdout --
	nospam-093146
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1003 17:59:42.152148   24516 status.go:458] kubeconfig endpoint: get endpoint: "nospam-093146" does not appear in /home/jenkins/minikube-integration/21625-8669/kubeconfig

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-093146 --log_dir /tmp/nospam-093146 status" failed: exit status 6
--- PASS: TestErrorSpam/status (0.89s)
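All three status calls exit 6 with "kubeconfig: Misconfigured": the host and kubelet are up but the kubeconfig no longer points at this cluster, and the output itself names the fix. A sketch that applies the suggested recovery when status reports an error (binary path and profile from the log; the error handling is generic, not minikube-specific):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        mk := "out/minikube-linux-amd64"
        profile := "nospam-093146"
        out, err := exec.Command(mk, "-p", profile, "status").CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            // The log's own suggestion for a misconfigured kubeconfig.
            fix, fixErr := exec.Command(mk, "-p", profile, "update-context").CombinedOutput()
            fmt.Printf("%s", fix)
            if fixErr != nil {
                fmt.Println("update-context failed:", fixErr)
            }
        }
    }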

                                                
                                    
TestErrorSpam/pause (1.33s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-093146 --log_dir /tmp/nospam-093146 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-093146 --log_dir /tmp/nospam-093146 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-093146 --log_dir /tmp/nospam-093146 pause
--- PASS: TestErrorSpam/pause (1.33s)

TestErrorSpam/unpause (1.33s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-093146 --log_dir /tmp/nospam-093146 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-093146 --log_dir /tmp/nospam-093146 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-093146 --log_dir /tmp/nospam-093146 unpause
--- PASS: TestErrorSpam/unpause (1.33s)

TestErrorSpam/stop (1.42s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-093146 --log_dir /tmp/nospam-093146 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-093146 --log_dir /tmp/nospam-093146 stop: (1.215146704s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-093146 --log_dir /tmp/nospam-093146 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-093146 --log_dir /tmp/nospam-093146 stop
--- PASS: TestErrorSpam/stop (1.42s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21625-8669/.minikube/files/etc/test/nested/copy/12212/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.66s)

TestFunctional/serial/CacheCmd/cache/add_local (0.79s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-889240 /tmp/TestFunctionalserialCacheCmdcacheadd_local1846423951/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 cache add minikube-local-cache-test:functional-889240
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 cache delete minikube-local-cache-test:functional-889240
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-889240
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.79s)
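The add_local flow above is a four-step round trip: build a throwaway image with docker, push it into minikube's image cache, remove it from the cache, then delete the host-side tag. A sketch of the same sequence (the build directory is a placeholder; the tag is the one used above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func step(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        fmt.Printf("$ %s %v\n%s", name, args, out)
        if err != nil {
            fmt.Println("step failed:", err)
        }
    }

    func main() {
        mk := "out/minikube-linux-amd64"
        tag := "minikube-local-cache-test:functional-889240"
        step("docker", "build", "-t", tag, ".") // "." stands in for the test's temp dir
        step(mk, "-p", "functional-889240", "cache", "add", tag)
        step(mk, "-p", "functional-889240", "cache", "delete", tag)
        step("docker", "rmi", tag)
    }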

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889240 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (273.717427ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)
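This block is the recovery path worth noting: delete an image inside the node with crictl, confirm crictl inspecti now fails, then minikube cache reload pushes every cached image back. A sketch chaining the same commands (binary path and profile as above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        fmt.Printf("$ %s %v\n%s", name, args, out)
        return err
    }

    func main() {
        mk := "out/minikube-linux-amd64"
        p := "functional-889240"
        run(mk, "-p", p, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
        // Expected to fail: the image was just removed from the node.
        if run(mk, "-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
            fmt.Println("image unexpectedly still present")
        }
        run(mk, "-p", p, "cache", "reload")
        if err := run(mk, "-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
            fmt.Println("image still missing after reload:", err)
        }
    }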

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/LogsCmd (0.86s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 logs
--- PASS: TestFunctional/serial/LogsCmd (0.86s)

TestFunctional/serial/LogsFileCmd (0.88s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 logs --file /tmp/TestFunctionalserialLogsFileCmd2411654273/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.88s)

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889240 config get cpus: exit status 14 (74.194293ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889240 config get cpus: exit status 14 (77.596395ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
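The exit status 14 on config get for an unset key is what the test leans on: unset then get fails, set then get succeeds. A sketch that treats that non-zero exit as "not set" (a mapping inferred from this log alone, not from minikube's documented exit codes):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // getConfig returns the value and whether the key was set at all.
    func getConfig(key string) (string, bool) {
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-889240",
            "config", "get", key).Output()
        if err != nil {
            return "", false // "config get" exited non-zero (14 in the log above)
        }
        return strings.TrimSpace(string(out)), true
    }

    func main() {
        if v, ok := getConfig("cpus"); ok {
            fmt.Println("cpus =", v)
        } else {
            fmt.Println("cpus is not set")
        }
    }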

                                                
                                    
TestFunctional/parallel/DryRun (0.42s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-889240 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-889240 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (187.73947ms)

-- stdout --
	* [functional-889240] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I1003 18:26:52.951060   58525 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:26:52.951389   58525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:26:52.951402   58525 out.go:374] Setting ErrFile to fd 2...
	I1003 18:26:52.951408   58525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:26:52.951689   58525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:26:52.952244   58525 out.go:368] Setting JSON to false
	I1003 18:26:52.953436   58525 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4164,"bootTime":1759511849,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:26:52.953552   58525 start.go:140] virtualization: kvm guest
	I1003 18:26:52.955560   58525 out.go:179] * [functional-889240] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:26:52.957522   58525 notify.go:220] Checking for updates...
	I1003 18:26:52.957537   58525 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:26:52.958946   58525 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:26:52.960391   58525 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:26:52.961530   58525 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:26:52.963024   58525 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:26:52.964105   58525 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:26:52.966743   58525 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:26:52.967449   58525 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:26:52.997739   58525 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:26:52.997895   58525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:26:53.058833   58525 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:26:53.049159643 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:26:53.059019   58525 docker.go:318] overlay module found
	I1003 18:26:53.060758   58525 out.go:179] * Using the docker driver based on existing profile
	I1003 18:26:53.062302   58525 start.go:304] selected driver: docker
	I1003 18:26:53.062317   58525 start.go:924] validating driver "docker" against &{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:26:53.062435   58525 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:26:53.065704   58525 out.go:203] 
	W1003 18:26:53.066998   58525 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1003 18:26:53.068236   58525 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-889240 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.42s)
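The dry run fails fast in the validation phase: 250MB is rejected against the 1800MB usable minimum before any container is touched. A tiny sketch of that guard, with the constant taken from this log rather than from minikube's source:

    package main

    import "fmt"

    const minUsableMB = 1800 // the "usable minimum" reported in the run above

    func validateMemory(requestedMB int) error {
        if requestedMB < minUsableMB {
            return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
                requestedMB, minUsableMB)
        }
        return nil
    }

    func main() {
        fmt.Println(validateMemory(250))  // rejected, as in the test
        fmt.Println(validateMemory(4096)) // accepted
    }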

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-889240 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-889240 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (159.410361ms)

-- stdout --
	* [functional-889240] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I1003 18:26:53.356472   58930 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:26:53.356745   58930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:26:53.356756   58930 out.go:374] Setting ErrFile to fd 2...
	I1003 18:26:53.356762   58930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:26:53.357062   58930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
	I1003 18:26:53.357508   58930 out.go:368] Setting JSON to false
	I1003 18:26:53.358398   58930 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4164,"bootTime":1759511849,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:26:53.358491   58930 start.go:140] virtualization: kvm guest
	I1003 18:26:53.360378   58930 out.go:179] * [functional-889240] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1003 18:26:53.361688   58930 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:26:53.361693   58930 notify.go:220] Checking for updates...
	I1003 18:26:53.363055   58930 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:26:53.364385   58930 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig
	I1003 18:26:53.365536   58930 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube
	I1003 18:26:53.366672   58930 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:26:53.367760   58930 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:26:53.369355   58930 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:26:53.369795   58930 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:26:53.393358   58930 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1003 18:26:53.393501   58930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:26:53.449005   58930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-03 18:26:53.436272745 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1003 18:26:53.449135   58930 docker.go:318] overlay module found
	I1003 18:26:53.451084   58930 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1003 18:26:53.452223   58930 start.go:304] selected driver: docker
	I1003 18:26:53.452240   58930 start.go:924] validating driver "docker" against &{Name:functional-889240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-889240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:26:53.452344   58930 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:26:53.454148   58930 out.go:203] 
	W1003 18:26:53.455299   58930 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1003 18:26:53.456336   58930 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
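Same invocation as DryRun, but the output (including the RSRC_INSUFFICIENT_REQ_MEMORY message) comes back in French, so localization follows the process locale. A sketch that forces a locale on the child process; the exact variable and value the test sets are an assumption here:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-889240",
            "--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
        // Assumed mechanism: a French locale in the environment selects the
        // translated messages seen above.
        cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
        out, _ := cmd.CombinedOutput() // exits 23 for the undersized memory, as above
        fmt.Printf("%s", out)
    }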

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/SSHCmd (0.64s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.64s)

TestFunctional/parallel/CpCmd (1.93s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh -n functional-889240 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 cp functional-889240:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd328528260/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh -n functional-889240 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh -n functional-889240 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.93s)

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/12212/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh "sudo cat /etc/test/nested/copy/12212/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)
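Together with CopySyncFile earlier in the run, this shows the sync convention: a file placed under $MINIKUBE_HOME/files/... appears in the node at the same path with the files/ prefix stripped (here .minikube/files/etc/test/nested/copy/12212/hosts becomes /etc/test/nested/copy/12212/hosts). A sketch of that mapping:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    // nodePath maps a host file under $MINIKUBE_HOME/files to the node path it
    // is synced to, per the CopySyncFile and FileSync steps above.
    func nodePath(minikubeHome, hostFile string) (string, error) {
        rel, err := filepath.Rel(filepath.Join(minikubeHome, "files"), hostFile)
        if err != nil {
            return "", err
        }
        return "/" + filepath.ToSlash(rel), nil
    }

    func main() {
        home := "/home/jenkins/minikube-integration/21625-8669/.minikube"
        p, _ := nodePath(home, home+"/files/etc/test/nested/copy/12212/hosts")
        fmt.Println(p) // /etc/test/nested/copy/12212/hosts
    }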

                                                
                                    
TestFunctional/parallel/CertSync (1.98s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/12212.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh "sudo cat /etc/ssl/certs/12212.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/12212.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh "sudo cat /usr/share/ca-certificates/12212.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/122122.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh "sudo cat /etc/ssl/certs/122122.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/122122.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh "sudo cat /usr/share/ca-certificates/122122.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.98s)
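Each certificate is checked under three names: the .pem in /etc/ssl/certs, the copy in /usr/share/ca-certificates, and a hash-named entry (51391683.0, 3ec20f2e.0) in the ca-certificates style. A sketch that sweeps the same six paths through minikube ssh:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Paths copied from the CertSync steps above.
        paths := []string{
            "/etc/ssl/certs/12212.pem",
            "/usr/share/ca-certificates/12212.pem",
            "/etc/ssl/certs/51391683.0",
            "/etc/ssl/certs/122122.pem",
            "/usr/share/ca-certificates/122122.pem",
            "/etc/ssl/certs/3ec20f2e.0",
        }
        for _, p := range paths {
            err := exec.Command("out/minikube-linux-amd64", "-p", "functional-889240",
                "ssh", "sudo cat "+p).Run()
            if err != nil {
                fmt.Println("missing in node:", p, err)
            } else {
                fmt.Println("present in node:", p)
            }
        }
    }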

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889240 ssh "sudo systemctl is-active docker": exit status 1 (317.726989ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889240 ssh "sudo systemctl is-active containerd": exit status 1 (309.628434ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)
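systemctl is-active exits with status 3 for an inactive unit (the "Process exited with status 3" above), which minikube ssh surfaces as a non-zero exit; that is the expected result here for the two disabled runtimes. The complementary check for the active runtime on this crio profile would be:

    # expected to print "active" and exit 0 on a crio-based profile
    out/minikube-linux-amd64 -p functional-889240 ssh "sudo systemctl is-active crio"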

TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.28s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.49s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-889240 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-889240 image ls --format short --alsologtostderr:
I1003 18:26:57.597660   61841 out.go:360] Setting OutFile to fd 1 ...
I1003 18:26:57.597904   61841 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:26:57.597913   61841 out.go:374] Setting ErrFile to fd 2...
I1003 18:26:57.597917   61841 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:26:57.598119   61841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
I1003 18:26:57.598685   61841 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:26:57.598777   61841 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:26:57.599145   61841 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
I1003 18:26:57.618503   61841 ssh_runner.go:195] Run: systemctl --version
I1003 18:26:57.618560   61841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
I1003 18:26:57.636721   61841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
I1003 18:26:57.738184   61841 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
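As the stderr trace shows, the CLI gathers this list by running crictl over SSH inside the node; a roughly equivalent manual query (a sketch, not the exact code path):

    out/minikube-linux-amd64 -p functional-889240 ssh "sudo crictl images --output json"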

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-889240 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-889240 image ls --format table --alsologtostderr:
I1003 18:26:58.296300   62230 out.go:360] Setting OutFile to fd 1 ...
I1003 18:26:58.296564   62230 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:26:58.296575   62230 out.go:374] Setting ErrFile to fd 2...
I1003 18:26:58.296579   62230 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:26:58.296797   62230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
I1003 18:26:58.297620   62230 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:26:58.297884   62230 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:26:58.299067   62230 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
I1003 18:26:58.319732   62230 ssh_runner.go:195] Run: systemctl --version
I1003 18:26:58.319776   62230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
I1003 18:26:58.338493   62230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
I1003 18:26:58.439404   62230 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-889240 image ls --format json --alsologtostderr:
[{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-889240 image ls --format json --alsologtostderr:
I1003 18:26:58.067420   62083 out.go:360] Setting OutFile to fd 1 ...
I1003 18:26:58.067707   62083 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:26:58.067717   62083 out.go:374] Setting ErrFile to fd 2...
I1003 18:26:58.067721   62083 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:26:58.067954   62083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
I1003 18:26:58.068741   62083 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:26:58.068872   62083 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:26:58.069336   62083 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
I1003 18:26:58.088147   62083 ssh_runner.go:195] Run: systemctl --version
I1003 18:26:58.088192   62083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
I1003 18:26:58.107954   62083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
I1003 18:26:58.209517   62083 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
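The stdout is one JSON array whose objects carry id, repoDigests, repoTags, and size, so it composes with standard tooling; for example, to print only the tags (assumes jq is installed on the host):

    out/minikube-linux-amd64 -p functional-889240 image ls --format json | jq -r '.[].repoTags[]'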

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-889240 image ls --format yaml --alsologtostderr:
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-889240 image ls --format yaml --alsologtostderr:
I1003 18:26:57.828162   61947 out.go:360] Setting OutFile to fd 1 ...
I1003 18:26:57.828567   61947 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:26:57.828577   61947 out.go:374] Setting ErrFile to fd 2...
I1003 18:26:57.828583   61947 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:26:57.828792   61947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
I1003 18:26:57.829358   61947 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:26:57.829475   61947 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:26:57.829831   61947 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
I1003 18:26:57.847950   61947 ssh_runner.go:195] Run: systemctl --version
I1003 18:26:57.848023   61947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
I1003 18:26:57.865463   61947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
I1003 18:26:57.967174   61947 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889240 ssh pgrep buildkitd: exit status 1 (279.222487ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 image build -t localhost/my-image:functional-889240 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-889240 image build -t localhost/my-image:functional-889240 testdata/build --alsologtostderr: (2.393062165s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-889240 image build -t localhost/my-image:functional-889240 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1955a07cc36
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-889240
--> 2185e20ab1c
Successfully tagged localhost/my-image:functional-889240
2185e20ab1ccd3aa0ec19206928d757c8199792a7f03587a834470eb249c8ca0
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-889240 image build -t localhost/my-image:functional-889240 testdata/build --alsologtostderr:
I1003 18:26:58.317701   62237 out.go:360] Setting OutFile to fd 1 ...
I1003 18:26:58.318065   62237 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:26:58.318077   62237 out.go:374] Setting ErrFile to fd 2...
I1003 18:26:58.318083   62237 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:26:58.318330   62237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8669/.minikube/bin
I1003 18:26:58.319081   62237 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:26:58.319771   62237 config.go:182] Loaded profile config "functional-889240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:26:58.320197   62237 cli_runner.go:164] Run: docker container inspect functional-889240 --format={{.State.Status}}
I1003 18:26:58.338279   62237 ssh_runner.go:195] Run: systemctl --version
I1003 18:26:58.338325   62237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-889240
I1003 18:26:58.356347   62237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21625-8669/.minikube/machines/functional-889240/id_rsa Username:docker}
I1003 18:26:58.457602   62237 build_images.go:161] Building image from path: /tmp/build.2631368726.tar
I1003 18:26:58.457697   62237 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1003 18:26:58.466916   62237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2631368726.tar
I1003 18:26:58.470488   62237 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2631368726.tar: stat -c "%s %y" /var/lib/minikube/build/build.2631368726.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2631368726.tar': No such file or directory
I1003 18:26:58.470514   62237 ssh_runner.go:362] scp /tmp/build.2631368726.tar --> /var/lib/minikube/build/build.2631368726.tar (3072 bytes)
I1003 18:26:58.487607   62237 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2631368726
I1003 18:26:58.494995   62237 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2631368726 -xf /var/lib/minikube/build/build.2631368726.tar
I1003 18:26:58.502697   62237 crio.go:315] Building image: /var/lib/minikube/build/build.2631368726
I1003 18:26:58.502769   62237 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-889240 /var/lib/minikube/build/build.2631368726 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1003 18:27:00.631864   62237 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-889240 /var/lib/minikube/build/build.2631368726 --cgroup-manager=cgroupfs: (2.129063659s)
I1003 18:27:00.631951   62237 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2631368726
I1003 18:27:00.639503   62237 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2631368726.tar
I1003 18:27:00.646646   62237 build_images.go:217] Built localhost/my-image:functional-889240 from /tmp/build.2631368726.tar
I1003 18:27:00.646692   62237 build_images.go:133] succeeded building to: functional-889240
I1003 18:27:00.646699   62237 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.90s)
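A build context consistent with the three STEPs logged above can be reconstructed as follows; this is a sketch, and the real testdata/build contents (content.txt in particular) may differ:

    mkdir -p build && cd build
    echo "test content" > content.txt    # placeholder payload; actual contents unknown
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    out/minikube-linux-amd64 -p functional-889240 image build -t localhost/my-image:functional-889240 .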

TestFunctional/parallel/ImageCommands/Setup (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-889240
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.55s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "358.070056ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "88.838582ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 image rm kicbase/echo-server:functional-889240 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "392.855391ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "62.60106ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

TestFunctional/parallel/MountCmd/specific-port (1.7s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-889240 /tmp/TestFunctionalparallelMountCmdspecific-port3898317380/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889240 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (324.334237ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1003 18:26:53.072283   12212 retry.go:31] will retry after 319.036264ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-889240 /tmp/TestFunctionalparallelMountCmdspecific-port3898317380/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889240 ssh "sudo umount -f /mount-9p": exit status 1 (283.741829ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-889240 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-889240 /tmp/TestFunctionalparallelMountCmdspecific-port3898317380/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.70s)
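The sequence above can be reproduced by hand; a sketch, assuming this profile and an empty host directory:

    # serve the host directory over 9p on a fixed port, in the background
    out/minikube-linux-amd64 mount -p functional-889240 /tmp/src:/mount-9p --port 46464 &
    # confirm the mount from inside the guest, then tear it down
    out/minikube-linux-amd64 -p functional-889240 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 mount -p functional-889240 --kill=true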

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-889240 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.93s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-889240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1298239377/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-889240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1298239377/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-889240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1298239377/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh "findmnt -T" /mount1
I1003 18:26:54.762710   12212 retry.go:31] will retry after 5.476663362s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889240 ssh "findmnt -T" /mount1: exit status 1 (352.003161ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1003 18:26:54.802241   12212 retry.go:31] will retry after 645.114993ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-889240 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-889240 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-889240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1298239377/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-889240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1298239377/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-889240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1298239377/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.93s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-889240 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-889240
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-889240
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-889240
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/pause/Command (0.45s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-553665 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.45s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.43s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-553665 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.43s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.23s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-553665 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-553665 --output=json --user=testUser: (1.229512196s)
--- PASS: TestJSONOutput/stop/Command (1.23s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-023222 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-023222 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (75.535931ms)

-- stdout --
	{"specversion":"1.0","id":"47174e41-9de2-4473-835a-66a6293c5578","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-023222] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"61b4c2a0-ba6d-4354-80fe-db1b90248cc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21625"}}
	{"specversion":"1.0","id":"cba0a7b2-d2cd-490d-8483-11d8f6f0d543","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"da895155-bc32-4f1c-aa5c-0146eecaeac3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21625-8669/kubeconfig"}}
	{"specversion":"1.0","id":"98ac7f81-5eba-4e59-b512-c6299dc279b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8669/.minikube"}}
	{"specversion":"1.0","id":"343f064c-4e3c-448e-867b-bac79bd0d827","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c5424f24-1f3b-402f-94ef-8f8f5952322f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1c9152ef-51e9-44d1-b51b-7b4f5f24712c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-023222" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-023222
--- PASS: TestErrorJSONOutput (0.22s)
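Each stdout line above is a CloudEvents envelope, so the error event can be pulled out of the stream with jq (assumes jq on the host; minikube itself still exits 56):

    out/minikube-linux-amd64 start -p json-output-error-023222 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # expected: The driver 'fail' is not supported on linux/amd64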

TestKicCustomNetwork/create_custom_network (27.55s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-781121 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-781121 --network=: (25.434507274s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-781121" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-781121
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-781121: (2.092314731s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.55s)

TestKicCustomNetwork/use_default_bridge_network (26.01s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-171010 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-171010 --network=bridge: (24.066627142s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-171010" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-171010
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-171010: (1.920222117s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.01s)

TestKicExistingNetwork (24.31s)

=== RUN   TestKicExistingNetwork
I1003 19:03:34.779544   12212 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1003 19:03:34.796716   12212 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1003 19:03:34.796808   12212 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1003 19:03:34.796838   12212 cli_runner.go:164] Run: docker network inspect existing-network
W1003 19:03:34.813157   12212 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1003 19:03:34.813185   12212 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1003 19:03:34.813225   12212 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1003 19:03:34.813371   12212 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1003 19:03:34.830426   12212 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013fb0}
I1003 19:03:34.830478   12212 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1003 19:03:34.830520   12212 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1003 19:03:34.885598   12212 network_create.go:108] docker network existing-network 192.168.49.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-618191 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-618191 --network=existing-network: (22.241573596s)
helpers_test.go:175: Cleaning up "existing-network-618191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-618191
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-618191: (1.924436239s)
I1003 19:03:59.069061   12212 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.31s)
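
Two notes on the log above: the long --format string is a Go template that renders a network's name, driver, subnet, gateway, MTU, and container IPs as one JSON-like line, and the initial "network existing-network not found" error is expected, since the network is only created afterwards. The scenario can be reproduced by hand; the profile name "repro" and the final cleanup line are placeholders, not from this run:

  docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 existing-network
  out/minikube-linux-amd64 start -p repro --network=existing-network
  out/minikube-linux-amd64 delete -p repro
  docker network rm existing-network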

TestKicCustomSubnet (25.14s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-077545 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-077545 --subnet=192.168.60.0/24: (23.025758023s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-077545 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-077545" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-077545
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-077545: (2.098762328s)
--- PASS: TestKicCustomSubnet (25.14s)
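
Here "--subnet" pins the CIDR of the network minikube creates for the profile, and the inspect template from the test reads it back. A minimal sketch, with "repro" as a placeholder profile name:

  out/minikube-linux-amd64 start -p repro --subnet=192.168.60.0/24
  docker network inspect repro --format "{{(index .IPAM.Config 0).Subnet}}"
  out/minikube-linux-amd64 delete -p repro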

TestKicStaticIP (23.9s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-730718 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-730718 --static-ip=192.168.200.200: (21.67177024s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-730718 ip
helpers_test.go:175: Cleaning up "static-ip-730718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-730718
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-730718: (2.085390246s)
--- PASS: TestKicStaticIP (23.90s)
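
"--static-ip" gives the node container a fixed address on the profile's network, and "minikube ip" should echo it back. A minimal sketch, with "repro" as a placeholder profile name:

  out/minikube-linux-amd64 start -p repro --static-ip=192.168.200.200
  out/minikube-linux-amd64 -p repro ip
  out/minikube-linux-amd64 delete -p repro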

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMountStart/serial/StartWithMountFirst (8.21s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-057474 --memory=3072 --mount-string /tmp/TestMountStartserial3155199568/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-057474 --memory=3072 --mount-string /tmp/TestMountStartserial3155199568/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.209513889s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.21s)
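
The mount flags above wire a host directory into the node over 9p: --mount-string maps a host:guest path pair, --mount-port selects the server port, and --mount-uid/--mount-gid/--mount-msize set ownership and the 9p message size. A hand-run sketch under those assumptions; the host path and profile name are placeholders:

  out/minikube-linux-amd64 start -p repro --memory=3072 \
    --mount-string /tmp/host-dir:/minikube-host \
    --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
    --no-kubernetes --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 -p repro ssh -- ls /minikube-host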

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-057474 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (5.34s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-075053 --memory=3072 --mount-string /tmp/TestMountStartserial3155199568/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-075053 --memory=3072 --mount-string /tmp/TestMountStartserial3155199568/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.338425087s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.34s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-075053 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.66s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-057474 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-057474 --alsologtostderr -v=5: (1.662742625s)
--- PASS: TestMountStart/serial/DeleteFirst (1.66s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-075053 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)
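
The DeleteFirst/VerifyMountPostDelete pair checks that per-profile mounts are independent: deleting the first profile must not tear down the second profile's mount. By hand, with placeholder profile names:

  out/minikube-linux-amd64 delete -p repro-1 --alsologtostderr -v=5
  out/minikube-linux-amd64 -p repro-2 ssh -- ls /minikube-host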

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-075053
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-075053: (1.196008247s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.36s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-075053
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-075053: (6.360823806s)
--- PASS: TestMountStart/serial/RestartStopped (7.36s)
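
Restarting the stopped profile with a bare "start" (no mount flags) still brings the mount back, presumably because the flags were persisted in the profile's saved config; VerifyMountPostStop below confirms it. A sketch with a placeholder profile name:

  out/minikube-linux-amd64 stop -p repro
  out/minikube-linux-amd64 start -p repro
  out/minikube-linux-amd64 -p repro ssh -- ls /minikube-host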

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-075053 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

Test skip (18/166)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)